Electrum is looking for an experienced and proficient DevOps Engineer. This role offers the opportunity to explore what’s possible in a collaborative and innovative work environment. If your goal is to work with a team of talented professionals keenly focused on solving complex business problems and supporting product innovation with technology, you might be our new DevOps Engineer. In this position, you will build out systems for our rapidly expanding team, enabling the whole engineering group to operate more effectively and iterate at top speed in an open, collaborative environment. The ideal candidate has a solid background in software engineering and extensive experience deploying product updates, identifying production issues, and implementing integrations; has a proven record of taking calculated risks and taking on challenges; and is a strong believer in efficiency and innovation, with exceptional communication and documentation skills.
YOU WILL:
- Plan for future infrastructure as well as maintain & optimize the existing infrastructure.
- Conceptualize, architect, and build:
- 1. Automated deployment pipelines in a CI/CD environment like Jenkins;
- 2. Infrastructure using Docker, Kubernetes, and other serverless platforms;
- 3. Secure networks utilizing VPCs, with input from the security team.
- Work with developers & the QA team to institute a policy of continuous integration with automated testing.
- Architect, build, and manage dashboards to provide visibility into delivery status and production application functional and performance status.
- Work with developers to institute systems, policies, and workflows which allow for a rollback of deployments.
- Triage releases of applications/hotfixes to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Participate in a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
- Assist developers and on-call engineers from other teams with postmortems, follow-up, and review of issues affecting production availability.
- Scale the Electrum platform to handle millions of concurrent requests.
- Reduce Mean Time To Recovery (MTTR); enable high availability and disaster recovery.
PREREQUISITES:
- Bachelor’s degree in engineering, computer science, or related field, or equivalent work experience.
- Minimum of six years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other AWS services.
- At least 2 years of experience in building and owning serverless infrastructure.
- At least 2 years of scripting experience in Python (preferred) or Shell, plus experience with web application deployment systems and continuous integration tools such as Ansible.
- Experience building a multi-region highly available auto-scaling infrastructure that optimizes performance and cost.
- Experience in automating the provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have prior experience automating deployments to production and lower environments.
- Experience delivering major automation solutions through scripts or infrastructure.
- Experience with APM tools such as DataDog and log management tools.
- Experience in designing and implementing system architecture processes; establishing and enforcing network security policy (AWS VPCs, Security Groups) and ACLs.
- Experience establishing and enforcing:
- 1. System monitoring tools and standards
- 2. Risk Assessment policies and standards
- 3. Escalation policies and standards
- Excellent DevOps engineering, team management, and collaboration skills.
- Advanced knowledge of programming languages such as Python, and strong skills in writing code and scripts.
- Experience with or knowledge of Application Performance Monitoring (APM); prior experience as an open-source contributor is preferred.

About Electrum
Results Matter to Us
We interviewed some of our customers who were our largest brand advocates: those who left the best reviews, referred the most people, and indicated they’d be open to sharing their experience. This created a story universe, or blueprint, of what most resonated with our customers as it relates to our brand and process. The goal was then to take these learnings and integrate them deeper into our process so that those values come through in everything we do.
Join Our Talented Team: We’re looking for creative minds to help us continue building the future of clean energy.
Electrum is an online solar and renewable energy marketplace that provides businesses and homeowners with expert advice and a concierge experience for home and business electrification. Our marketplace is integrated into utility and OEM websites as a co-branded or white-labeled solution offered to their customers. We have a growing list of partners and new verticals to deliver on. We help home and business owners figure out just how much they can save on their energy costs through electrification, determine the most affordable options, and navigate all available tax incentives and rebates.
We have a bidding platform with a national network of vetted installers and solar companies competing for our clients' business. A homeowner's Energy Advisor presents the project for bid, talks to installers, pre-screens installer bids, and assists the customer in selecting the best customized option for their home and budget. This process results in great deals for our customers and, even better, a great experience!
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
- Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
- Configure and maintain Docker containers and/or Kubernetes clusters.
- Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation.
- Automate build, deployment, and monitoring processes.
- Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
- Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk.
- Ensure system scalability, reliability, and security.
- Troubleshoot production issues and perform root-cause analysis.
- Collaborate with engineering teams to improve deployment and development workflows.
- Optimize infrastructure costs and improve performance.
Required Skills & Qualifications
- 3+ years of experience in DevOps, SRE, or Cloud Engineering.
- Strong hands-on knowledge of AWS cloud services.
- Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
- Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
- Experience with Linux administration and shell scripting.
- Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
- Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
- Experience with Terraform or CloudFormation (IaC).
- Good understanding of Node.js or similar application deployments.
- Knowledge of NGINX/Apache and load balancing concepts.
- Strong problem-solving and communication skills.
Preferred/Good to Have
- Experience with Kubernetes (EKS).
- Experience with Serverless architectures (Lambda).
- Experience with Redis, MongoDB, RDS.
- Certification in AWS Solutions Architect / DevOps Engineer.
- Experience with security best practices, IAM policies, and DevSecOps.
- Understanding of cost optimization and cloud cost management.
About Us
At Arka Energy, we're redefining how renewable energy is experienced and adopted in homes. Our focus is on developing next-generation residential solar energy solutions through a unique combination of custom product design, intuitive simulation software, and high-impact technology. With engineering teams in Bangalore and the Bay Area, we’re committed to building innovative products that transform rooftops into smart energy ecosystems.
Our flagship product is a 3D simulation platform that models rooftops and commercial sites, allowing users to design solar layouts and generate accurate energy estimates — streamlining the residential solar design process like never before.
What We're Looking For
We're seeking a Senior DevOps Engineer who will be responsible for managing and automating cloud infrastructure and services, ensuring seamless integration and deployment of applications, and maintaining high availability and reliability. You will work closely with development and operations teams to streamline processes and enhance productivity.
Key Responsibilities
- Design and implement CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and configuration in the Azure cloud environment.
- Monitor and manage system health, performance, and security.
- Collaborate with development teams to ensure smooth and secure deployment of applications.
- Troubleshoot and resolve issues related to deployment and operations.
- Implement best practices for configuration management and infrastructure as code.
- Maintain documentation of processes and solutions.
Requirements
- Total relevant experience of 4 to 5 years.
- Proven experience as a DevOps Engineer, specifically with Azure.
- Experience with CI/CD tools and practices.
- Strong understanding of infrastructure as code (IaC) using tools like Terraform or ARM templates.
- Knowledge of scripting languages such as PowerShell or Python.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Good to have: knowledge of AWS, DigitalOcean, or GCP.
- Excellent troubleshooting and problem-solving skills.
- High ownership, a self-starter attitude, and the ability to work independently.
- Strong aptitude and reasoning ability with a growth mindset.
Nice to Have
- Experience working in a SaaS or product-driven startup.
- Familiarity with the solar industry (preferred but not required).
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, manage users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Disaster recovery and infrastructure restoration
- Restore/Recovery operations from backups
- Automate routine tasks
- Execute company initiatives in the infrastructure space
- Expertise with observability tools like ELK, Prometheus, Grafana, and Loki
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads
- Will be the client point of contact for high-priority technical issues and new requirements
- Should act as tech lead, guiding and mentoring junior members of the team
- Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, deploy, and manage production workloads, including applications on EC2 instances, APIs on Lambda functions, and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
Qualifications
- At least 5+ years of IT experience implementing enterprise applications preferred
- Should be AWS Solutions Architect Associate certified
- Must have at least 3+ years of experience working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route 53, WAF, API Gateway, Elastic Beanstalk, ECS, ECR, Lambda, SQS, SNS, S3, Elasticsearch, DocumentDB, IAM, etc.
- Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience with on-prem to AWS cloud workload migration
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost, and security optimization
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and guide the team wherever needed
- Experience creating backups and managing disaster recovery
- Experience with infrastructure-as-code automation using scripts and tools like CloudFormation and Terraform
- Any exposure towards creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK and deploying applications on AWS
- Good understanding of Containerisation technologies like Docker, Kubernetes etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
Experienced with Azure DevOps, CI/CD, and Jenkins.
Experience is needed in Kubernetes (AKS), Ansible, Terraform, and Docker.
Good understanding of Azure networking, Azure Application Gateway, and other Azure components.
Experienced Azure DevOps Engineer ready for a Senior role or already at a Senior level.
Demonstrable experience with the following technologies:
Microsoft Azure Platform-as-a-Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions, and other serverless services.
Understanding of Microsoft identity and access management products, including Azure AD or Azure AD B2C.
Microsoft Azure Operational and Monitoring tools, including Azure Monitor, App Insights and Log Analytics.
Knowledge of PowerShell, GitHub, ARM templates, version controls/hotfix strategy and deployment automation.
Ability and desire to quickly pick up new technologies, languages, and tools
Excellent communication skills; a good team player.
A passion for code quality and best practices is an absolute must.
Must show evidence of your passion for technology and continuous learning
As DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Build Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters, patching when necessary.
- Work with cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
Required Technical and Professional Expertise:
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using Shell or Python, and of tools such as Terraform, Ansible, Puppet, or Chef.
- Experience with, and a good understanding of, at least one cloud platform such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Familiarity with the Jira tool is a plus.
We look forward to connecting with you. We will wait around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager, and we will aim to close this requirement within a reasonable time window. Candidates will be kept informed and updated on feedback and application status.
MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX’s very own artificial intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built artificial neural networks designed to leverage the power of machine learning. The Maverick platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.
Responsibilities:
- Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
- Troubleshoot technical or functional issues in a complex environment, across various global applications and platforms, to provide timely resolution.
- Bring experience with Google Cloud Platform.
- Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
- Configure and manage data sources like PostgreSQL, MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc.
- Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
- Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
- Work with users to understand and gather their needs for our catalogue, then participate in the required development
- Manage several streams of work concurrently
- Understand how various systems work
- Understand how IT operations are managed
What you will bring:
- 5 years of work experience as a DevOps Engineer.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must possess experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.
- Experience with the software development process and with tools and languages like SaaS, Python, Java, MongoDB, shell scripting, MySQL, and Git.
- Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance)
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly internet reimbursement of up to Rs. 1,000
- Opportunity to pursue executive programs/courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
*******************
Below are the job details:
Role: DevOps Architect
Experience Level: 8-12 Years
Job Location: Hyderabad
Key Responsibilities:
Look through the various DevOps tools/technologies, identify their strengths, and provide direction to the DevOps automation team
Bring out-of-the-box thinking to the DevOps automation platform implementation
Explore various tools and technologies and do POCs on integrating these tools
Evaluate backend APIs for various DevOps tools
Perform code reviews, keeping RASUI in context
Mentor the team on the various E2E integrations
Act as a liaison in evangelizing the automation solution currently implemented
Bring in various DevOps best practices/principles and participate in their adoption with various app teams
Must have:
Should possess a Bachelor's/Master's in computer science with a minimum of 8+ years of experience
Should possess a minimum of 3 years of strong experience in DevOps
Should possess expertise in using various DevOps tools, libraries, and APIs (Jenkins/Jira/AWX/Nexus/GitHub/Bitbucket/SonarQube)
Should possess expertise in optimizing the DevOps stack (containers/Kubernetes/monitoring)
2+ years of experience creating solutions and translating them to the development team
Should have a strong understanding of OOP and the SDLC (Agile/SAFe standards)
Proficient in Python, with a good knowledge of its ecosystem (IDEs and frameworks)
Proficient in various cloud platforms (Azure/AWS/Google Cloud Platform)
Proficient in various DevOps offerings (Pivotal/OpenStack/Azure DevOps)
Regards,
Talent acquisition team
Tetrasoft India
Stay home and stay safe