
Roles and Responsibilities:
- Strong experience with programming microcontrollers like Arduino, ESP32, and ESP8266.
- Experience with Embedded C/C++.
- Experience with Raspberry Pi, Python, and OpenCV.
- Experience with low-power devices is preferred.
- Knowledge about communication protocols (UART, I2C, etc.)
- Experience with Wi-Fi, LoRa, GSM, M2M, SIMCom, and Quectel modules.
- Experience with 3D modeling (preferred).
- Experience with 3D printers (preferred).
- Experience with Hardware design and knowledge of basic electronics.
- Experience with software development is preferred.
Detailed day-to-day responsibilities of the IoT developer:
· Design hardware that meets the needs of the application.
· Support for current hardware, testing, and bug-fixing.
· Create, maintain, and document microcontroller code.
· Carry out prototyping, testing, and soldering.
· Create 3D/CAD models for PCBs.
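For illustration of the I2C work mentioned above, here is a minimal Python sketch (as might run on a Raspberry Pi) that decodes a raw temperature reading. The TMP102-style 12-bit register format and the 0.0625 °C/LSB scaling are illustrative assumptions, not requirements of this role.

```python
def raw_to_celsius(msb: int, lsb: int) -> float:
    """Convert two raw I2C register bytes to degrees Celsius.

    Assumes a TMP102-style 12-bit two's-complement reading where
    each LSB equals 0.0625 degC (an illustrative sensor format).
    """
    raw = (msb << 4) | (lsb >> 4)   # assemble the 12-bit value
    if raw & 0x800:                 # sign-extend negative readings
        raw -= 1 << 12
    return raw * 0.0625

# On real hardware the two bytes would come from an I2C read, e.g.
# with the smbus2 library: bus.read_i2c_block_data(0x48, 0x00, 2)
print(raw_to_celsius(0x19, 0x00))  # 25.0
```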

About BDS Services Pvt Ltd (formerly Balaji Data Solutions)
We are seeking a highly skilled DevOps Engineer with 5–8 years of hands-on experience
to join our growing team in Bengaluru. The ideal candidate will have deep expertise in
managing Kubernetes clusters on Azure and AWS, a solid understanding of CI/CD
pipelines using Azure DevOps, and familiarity with container registries across cloud
providers. Exposure to GCP is a strong advantage as we scale across platforms. A
working knowledge of the AI/ML domain will significantly enhance your ability to
support our platform and engineering teams.
Key Responsibilities
• Design and implement Damia deployments via marketplaces (Azure/AWS).
• Design, deploy, and manage scalable, highly available Kubernetes clusters on Azure AKS and AWS EKS.
• Build fully automated release pipelines for deployment across multiple cloud environments, and support delivery teams by training them to manage deployments in clients’ environments.
• Set up and maintain CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools.
• Manage cloud-based container registries (ACR, ECR, GCR) and ensure secure image management practices.
• Develop Infrastructure as Code (IaC) using Terraform, Bicep, or Helm to maintain consistent environments.
• Collaborate closely with ML Engineers and AI researchers to ensure infrastructure supports AI/ML pipelines.
• Monitor system performance and implement robust observability (Prometheus, Grafana, ELK, etc.).
• Work cross-functionally to manage cloud costs, security, and compliance for multi-cloud environments.
• Prepare for upcoming GCP migration initiatives and support cloud-native
development efforts.
Requirements
• 5–8 years of DevOps experience in production-grade environments.
• Strong understanding of Enterprise Linux-based deployments (e.g., RHEL, Ubuntu).
• Strong expertise with Kubernetes on Azure (AKS) and AWS (EKS).
• Experience with Azure DevOps Pipelines, Repos, and Artifact management.
• Proficiency in cloud-native tools and container orchestration best practices.
• Familiarity with monitoring, logging, and alerting tools.
• Ability to write automation scripts for customer specific deployments.
• Knowledge in Python or Bash scripting is a must.
• Strong understanding of security best practices and their implementation in production deployments.
• Understanding of software-package and container-image vulnerability scanning tools, and of generating regular reports from them.
• Interest in exploring the latest developments in the DevSecOps space and adapting them to the organization’s current needs.
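As an illustration of the scripting and image-scanning work described above, the sketch below tallies vulnerabilities by severity from a scan report. It assumes a Trivy-style JSON layout (`{"Results": [{"Vulnerabilities": [{"Severity": ...}]}]}`); the key names would need adjusting for a different scanner.

```python
import json
from collections import Counter

def severity_counts(report_json: str) -> Counter:
    """Tally vulnerabilities by severity from an image-scan report.

    Assumes a Trivy-style JSON layout; adjust the keys for your
    scanner of choice.
    """
    report = json.loads(report_json)
    counts = Counter()
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln.get("Severity", "UNKNOWN")] += 1
    return counts

sample = json.dumps({
    "Results": [
        {"Vulnerabilities": [{"Severity": "HIGH"}, {"Severity": "LOW"}]},
        {"Vulnerabilities": [{"Severity": "HIGH"}]},
    ]
})
print(severity_counts(sample))  # e.g. Counter({'HIGH': 2, 'LOW': 1})
```

A script like this is the kind of thing that would feed the "regular reports" requirement, e.g. by dumping the counts into a dashboard or scheduled email.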

- Proficiency in Python, Django, and other allied frameworks;
- Expert in designing UI/UX interfaces;
- Expert in testing, troubleshooting, debugging and problem solving;
- Basic knowledge of SEO;
- Good communication;
- Team building and good acumen;
- Ability to perform;
- Continuous learning
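Since Django (listed above) is built on the WSGI interface, a stdlib-only sketch of that request/response contract gives a feel for the foundation; the route and helper names here are illustrative.

```python
def app(environ, start_response):
    """A minimal WSGI application -- the interface Django itself
    implements under the hood. Route and body are illustrative."""
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from {path}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

def call(app, path):
    """Exercise a WSGI app without a server by calling it directly."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    chunks = app({"PATH_INFO": path}, start_response)
    return captured["status"], b"".join(chunks)

print(call(app, "/jobs"))  # ('200 OK', b'Hello from /jobs')
```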
DevOps Engineer
Roles and Responsibilities:
As a DevOps Engineer, you’ll be responsible for ensuring that our products can be seamlessly deployed on infrastructure, whether it is on-prem or on public clouds.
- Create, manage, and improve CI/CD pipelines to ensure our platform and applications can be deployed seamlessly
- Evaluate, Debug, and Integrate our products with various Enterprise systems & applications
- Build metrics, monitoring, logging, configurations, analytics and alerting for performance and security across all endpoints and applications
- Build and manage infrastructure-as-code deployment tooling, solutions, microservices and support services on multiple cloud providers and on-premises
- Ensure reliability, availability and security of our infrastructure and products
- Update our processes and design new processes as needed to optimize performance
- Automate our processes in compliance with our security requirements
- Manage code deployments, fixes, updates, and related processes
- Manage environment where we deploy our product to multiple clouds that we control as well as to client-managed environments
- Work with CI/CD tools and source control such as Git and SVN
Skills/Requirements:
- 2+ years of experience in DevOps, SRE or equivalent positions
- Experience working with Infrastructure as Code / Automation tools
- Experience in deploying, analysing, and debugging across multiple environments (AWS, Azure, private clouds, data centres, etc.); Linux/Unix administration; and databases such as MySQL, PostgreSQL, NoSQL, DynamoDB, Cosmos DB, MongoDB, Elasticsearch, and Redis (both managed and self-installed instances).
- Knowledge of scripting languages such as Python, PowerShell and / or Bash.
- Hands-on experience with the following is a must: Docker, Kubernetes, ELK Stack
- Hands-on experience with at least three of the following: Terraform, AWS CloudFormation, Jenkins, Wazuh SIEM, Ansible, Ansible Tower, Puppet, Chef
- Good troubleshooting skills with the ability to spot issues.
- Strong communication skills and documentation skills.
- Experience with deployments with Fortune 500 or other large Global Enterprise clients is a big plus
- Experience with participating in an ISO27001 certification / renewal cycle is a plus.
- Understanding of Information Security fundamentals and compliance requirements
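The metrics/monitoring/alerting responsibility above can be illustrated with a toy alert rule: flag a batch of JSON log lines when the share of ERROR entries gets too high. The `{"level": ...}` log shape and the 20% threshold are assumptions; in production such a rule would live in Prometheus or ELK alerting rather than a script.

```python
import json

def should_alert(log_lines, max_error_ratio=0.2):
    """Return True when the share of ERROR entries in a batch of
    JSON log lines exceeds max_error_ratio. Log shape and the
    default threshold are illustrative assumptions."""
    levels = [json.loads(line).get("level") for line in log_lines]
    if not levels:
        return False
    errors = sum(1 for lvl in levels if lvl == "ERROR")
    return errors / len(levels) > max_error_ratio

batch = [
    '{"level": "INFO",  "msg": "ok"}',
    '{"level": "ERROR", "msg": "timeout"}',
    '{"level": "INFO",  "msg": "ok"}',
]
print(should_alert(batch))  # 1/3 > 0.2 -> True
```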
Work From Home
Start Up Background is preferred
Company Location: Noida
● Bachelor’s degree or 5+ years of professional experience.
● 2+ years of hands-on programming experience in languages such as Python, Ruby, Go, Swift, Java, .NET, C++, or a similar object-oriented language.
● Experience with automating cloud native technologies, deploying applications, and
provisioning infrastructure.
● Hands-on experience with Infrastructure as Code, using CloudFormation, Terraform, or
other tools.
● Experience developing cloud native CI/CD workflows and tools, such as Jenkins,
Bamboo, TeamCity, Code Deploy (AWS) and/or GitLab.
● Hands-on experience with microservices and distributed application architecture, such
as containers, Kubernetes, and/or serverless technology.
● Hands-on experience in building/managing data pipelines, reporting & analytics.
● Experience with the full software development lifecycle and delivery using Agile
practices.
● Preferable (bonus points if you know these):
○ AWS cloud management
○ Kafka
○ Databricks
○ Gitlab CI/CD hooks
○ Python notebooks
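To illustrate the Infrastructure-as-Code experience listed above, here is a sketch of generating a minimal CloudFormation template programmatically. The single S3 bucket resource and its name are hypothetical; a real template would be linted (e.g. with cfn-lint) and reviewed before deployment.

```python
import json

def s3_bucket_template(bucket_name: str) -> str:
    """Emit a minimal CloudFormation template as JSON. The single
    S3 bucket and its name are illustrative placeholders."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ExampleBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }
    return json.dumps(template, indent=2)

print(s3_bucket_template("hypothetical-demo-bucket"))
```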

Java with cloud
- Core Java, Spring Boot, Microservices
- DB2 or any RDBMS database application development
- Linux OS, shell scripting, batch processing
- Troubleshooting large-scale applications
- Experience in automation and unit test frameworks is a must
- AWS Cloud experience desirable
- Agile development experience
- Complete development cycle (Dev, QA, UAT, Staging)
- Good oral and written communication skills
Azure DevOps
On premises to Azure Migration
Docker, Kubernetes
Terraform CI CD pipeline
Experience: 9+ years
Location: BG (Bengaluru), Hyderabad, Remote, Hybrid
Budget: up to 30 LPA
Requirements:
- Experience of managing Engineering teams in an Agile environment.
- Expert knowledge of delivering solutions in Azure cloud within a large-scale enterprise environment.
- Great understanding of DevOps principles and how they assist in taking products to market in an effective manner.
- Experience of Automation/Configuration management tools as well as working in a continuous delivery environment, monitoring and tooling.
- Knowledge and experience in Azure, Kubernetes, Containerisation, Azure DevOps pipelines.
- Experience in managing permissions in Azure DevOps.
- Working experience in Application Gateways, App Services, Front-Door, Azure Service Bus, etc.
- Troubleshooting experience in virtual/cloud infrastructures.
- Experience in delivery of projects using IAC (Infrastructure as Code).
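The IaC delivery requirement above often involves reviewing Terraform plans before applying them. The sketch below summarizes planned actions from `terraform show -json tfplan` output; only the `resource_changes`/`change`/`actions` keys of Terraform's JSON plan format are relied on, and the sample resources are hypothetical.

```python
import json
from collections import Counter

def plan_summary(plan_json: str) -> Counter:
    """Count planned actions (create/update/delete/...) from the
    JSON output of `terraform show -json tfplan`. A real script
    would also surface the affected resource addresses."""
    plan = json.loads(plan_json)
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        for action in rc.get("change", {}).get("actions", []):
            counts[action] += 1
    return counts

sample = json.dumps({"resource_changes": [
    {"address": "azurerm_resource_group.rg",
     "change": {"actions": ["create"]}},
    {"address": "azurerm_kubernetes_cluster.aks",
     "change": {"actions": ["update"]}},
]})
print(plan_summary(sample))  # Counter({'create': 1, 'update': 1})
```

A summary like this can gate a CI/CD stage, e.g. requiring manual approval when any `delete` actions appear.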
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement services are built on standard DevOps practices and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software as a service, and cloud services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementing continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc.
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack).
· Exposure to relational database technologies (MySQL, Postgres, Oracle) or any NoSQL database.
· Worked on open-source tools for logging, monitoring, search engine, caching, etc.
· Professional Certificates in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multinational clients’ schedules.
Will be involved in solution design from the conceptual stages through the development cycle and deployment.
Involved in development operations and supporting internal teams.
Improve infrastructure uptime, performance, resilience, and reliability through automation.
Willing to learn new technologies and work on research-orientated projects
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
www.banyandata.com
ProxySQL
Galera Cluster
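The ProxySQL and Galera Cluster items above go together: ProxySQL routes traffic only to healthy Galera nodes. The sketch below mirrors the kind of check a ProxySQL scheduler script performs, using two standard Galera status variables from `SHOW GLOBAL STATUS LIKE 'wsrep_%'`; the sample status dictionaries are illustrative.

```python
def galera_node_healthy(status: dict) -> bool:
    """Decide whether a Galera node should receive traffic, based on
    rows from SHOW GLOBAL STATUS LIKE 'wsrep_%'. Uses the standard
    wsrep_cluster_status and wsrep_ready status variables."""
    return (status.get("wsrep_cluster_status") == "Primary"
            and status.get("wsrep_ready") == "ON")

healthy = {"wsrep_cluster_status": "Primary", "wsrep_ready": "ON"}
split_brain = {"wsrep_cluster_status": "non-Primary", "wsrep_ready": "OFF"}
print(galera_node_healthy(healthy), galera_node_healthy(split_brain))  # True False
```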







