

Key Responsibilities:
· Lead the design and implementation of scalable infrastructure using IaC principles.
· Develop and manage configuration management tools primarily with Chef.
· Write and maintain automation scripts in Python to streamline infrastructure tasks.
· Build, manage, and version infrastructure using Terraform.
· Collaborate with cloud architects and DevOps teams to ensure highly available, secure, and scalable systems.
· Provide guidance and mentorship to junior engineers.
· Monitor infrastructure performance and provide optimization recommendations.
· Ensure compliance with best practices for security, governance, and automation.
· Maintain and improve CI/CD pipelines with infrastructure integration.
· Support incident management, troubleshooting, and root cause analysis for infrastructure issues.
Required Skills & Experience:
· Strong hands-on experience in:
o Chef (Cookbooks, Recipes, Automation)
o Python (Scripting, automation tasks, REST APIs; see the sketch after this list)
o Terraform (Modules, state management, deployments)
· Experience in AWS services (EC2, VPC, IAM, S3, etc.)
· Familiarity with Windows administration and automation.
· Solid understanding of CI/CD processes, the infrastructure lifecycle, and Git-based workflows.
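For illustration, a minimal sketch of the kind of Python automation referenced above, assuming boto3 is installed and AWS credentials are configured; the "Owner" tag key and default value are hypothetical placeholders rather than anything specified in this posting:

    # Minimal sketch: tag EC2 instances that are missing an "Owner" tag.
    import boto3

    ec2 = boto3.client("ec2")

    def tag_untagged_instances(default_owner="infra-team"):
        """Find instances without an Owner tag and apply a default one."""
        untagged = []
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    if "Owner" not in tags:
                        untagged.append(instance["InstanceId"])
        if untagged:
            ec2.create_tags(
                Resources=untagged,
                Tags=[{"Key": "Owner", "Value": default_owner}],
            )
        return untagged

    if __name__ == "__main__":
        print("Tagged instances:", tag_untagged_instances())

The same pattern (paginate, filter, act) extends to most boto3-based housekeeping scripts.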

About Wissen Technology
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains.
With offices in the US, India, UK, Australia, Mexico, and Canada, we offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation.
Leveraging our multi-site operations in the USA and India and the availability of world-class infrastructure, we offer a combination of on-site, off-site, and offshore service models. Our technical competencies, proactive management approach, proven methodologies, committed support, and the ability to react quickly to urgent needs make us a valued partner for any kind of Digital Enablement Services, Managed Services, or Business Services.
We believe that the technology and thought leadership that we command in the industry is the direct result of the kind of people we have been able to attract, to form this organization (you are one of them!).
Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like MIT, Wharton, IITs, IIMs, and BITS and with rich work experience in some of the biggest companies in the world.
Wissen Technology has been certified as a Great Place to Work®. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

Location: Malleshwaram/MG Road
Work: Initially Onsite and later Hybrid
We are committed to becoming a true DevOps house and want your help. The role will require close liaison with development and test teams to increase the effectiveness of current dev processes. Participation in an out-of-hours emergency support rota will be required. You will be shaping the way that we use our DevOps tools and innovating to deliver business value and improve the cadence of the entire dev team.
Required Skills:
• Good knowledge of the Amazon Web Services suite (EC2, ECS, Load Balancing, VPC, S3, RDS, Lambda, CloudWatch, IAM, etc.)
• Hands-on knowledge of container orchestration tools (must have: AWS ECS; good to have: AWS EKS). See the sketch after this list.
• Good knowledge of creating and maintaining infrastructure as code using Terraform
• Solid experience with CI/CD tools like Jenkins, Git, and Ansible
• Working experience supporting microservices (deploying, maintaining, and monitoring Java web-based production applications using Docker containers)
• Strong knowledge of debugging production issues across services and the technology stack, and of application monitoring (we use Splunk & CloudWatch)
• Experience with software build tools (Maven and Node)
• Experience with scripting and automation languages (Bash, Groovy, JavaScript, Python)
• Experience with Linux administration and CVE scanning (Amazon Linux, Ubuntu)
• 4+ years of experience as an AWS DevOps Engineer
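As a concrete example of the ECS knowledge called out above, here is a minimal Python sketch, assuming boto3 and AWS credentials; the cluster name is a hypothetical placeholder:

    # Minimal sketch: flag ECS services whose running task count lags the desired count.
    import boto3

    ecs = boto3.client("ecs")

    def unhealthy_services(cluster="prod-cluster"):
        # list_services returns up to 10 ARNs per page, which matches the
        # describe_services batch limit; a full tool would paginate.
        service_arns = ecs.list_services(cluster=cluster)["serviceArns"]
        if not service_arns:
            return []
        described = ecs.describe_services(cluster=cluster, services=service_arns)
        return [
            (s["serviceName"], s["runningCount"], s["desiredCount"])
            for s in described["services"]
            if s["runningCount"] < s["desiredCount"]
        ]

    if __name__ == "__main__":
        for name, running, desired in unhealthy_services():
            print(f"{name}: {running}/{desired} tasks running")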
Optional skills:
• Oracle/SQL database maintenance experience
• Elasticsearch
• Serverless/container-based approaches
• Automated testing of infrastructure deployments
• Experience with performance testing & JVM tuning
• Experience with a high-volume distributed eCommerce environment
• Experience working closely with Agile development teams
• Familiarity with load testing tools & process
• Experience with Nginx, Tomcat, and Apache
• Experience with Cloudflare
Personal attributes
The successful candidate will be comfortable working autonomously and independently.
They will be keen to bring an entire team to the next level of delivering business value.
A proactive approach to problem-solving is essential.
Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Since starting in 2018, we have grown more than 10x in the last year into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured our Series-D funding.
Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!
What will you do?
• Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective
• Create standardized tooling and templates for development teams to create CI/CD pipelines
• Ensure infrastructure is created and maintained using Terraform
• Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.
• Maintain transparency and clear visibility of the costs associated with various product verticals and environments, and work with stakeholders to plan optimizations and their implementation (see the sketch after this list)
• Spearhead continuous experimentation and innovation initiatives to optimize the infrastructure in terms of uptime, availability, latency, and cost
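To make the cost-visibility point above concrete, here is a minimal Python sketch, assuming boto3 with Cost Explorer access and that product verticals are tracked with a "product" cost-allocation tag (a hypothetical tag key):

    # Minimal sketch: month-to-date cost grouped by a "product" cost-allocation tag.
    import datetime
    import boto3

    ce = boto3.client("ce")

    def cost_by_product_tag():
        today = datetime.date.today()
        start = today.replace(day=1).isoformat()
        end = today.isoformat()
        if start == end:  # on the 1st of the month, widen the window by a day
            end = (today + datetime.timedelta(days=1)).isoformat()
        response = ce.get_cost_and_usage(
            TimePeriod={"Start": start, "End": end},
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "TAG", "Key": "product"}],
        )
        for group in response["ResultsByTime"][0]["Groups"]:
            tag_value = group["Keys"][0]  # e.g. "product$payments"
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(f"{tag_value}: ${amount:.2f}")

    if __name__ == "__main__":
        cost_by_product_tag()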
You should apply, if you
1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (NodeJS), Go, Python, Java, Erlang, Elixir, C++, or Ruby (experience in any one of them is enough)
2. Are a Mr. Perfectionist: You have a strong bias for automation and for taking the time to think about the right way to solve a problem versus quick fixes or band-aids.
3. Bring your A-Game: Have hands-on experience and the ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, and Logging, and experience in setting up backups, patching, and DR planning
4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform, or Microsoft Azure), and have experience creating and managing infrastructure entirely through a tool like Terraform
5. Have it all at your fingertips: Have experience building CI/CD pipelines using Jenkins and Docker for applications running mainly on Kubernetes, and hands-on experience managing and troubleshooting applications running on K8s (see the sketch after this list)
6. Have nailed the data storage game: Good knowledge of relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)
7. Bring that extra zing: Have the ability to program/script and strong fundamentals in Linux and networking.
8. Know your toys: Have a good understanding of microservices architecture and Big Data technologies; experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self-hosted environments is a plus
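As a small illustration of the Kubernetes troubleshooting mentioned in point 5, here is a minimal Python sketch, assuming the official "kubernetes" client package and a kubeconfig that points at the target cluster:

    # Minimal sketch: list pods that are not Running or Succeeded, a common
    # first step when troubleshooting applications on Kubernetes.
    from kubernetes import client, config

    def problem_pods():
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        pods = v1.list_pod_for_all_namespaces(watch=False)
        return [
            (p.metadata.namespace, p.metadata.name, p.status.phase)
            for p in pods.items
            if p.status.phase not in ("Running", "Succeeded")
        ]

    if __name__ == "__main__":
        for namespace, name, phase in problem_pods():
            print(f"{namespace}/{name}: {phase}")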
Being Part of the Clan
At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!
It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️
Are you a go-getter with the chops to nail what you do? Then this is the place for you.
What we look for:
As a DevOps Developer, you will contribute to a thriving and growing AI Governance Engineering team. You will work in a Kubernetes-based microservices environment to support our bleeding-edge cloud services. This includes custom solutions as well as open-source DevOps tools (build and deploy automation, monitoring, and data gathering for our software delivery pipeline). You will also contribute to our continuous improvement and continuous delivery while increasing the maturity of our DevOps and agile adoption practices.
Responsibilities:
- Ability to deploy software using orchestrators/scripts/automation on hybrid and public clouds like AWS
- Ability to write shell, Python, or other Unix scripts
- Working knowledge of Docker & Kubernetes
- Ability to create pipelines using Jenkins or any CI/CD tool, and GitOps tools like ArgoCD
- Working knowledge of Git as a source control system and of a defect tracking system
- Ability to debug and troubleshoot deployment issues
- Ability to use tools for faster resolution of issues
- Excellent communication and soft skills
- Passionate, with the ability to work and deliver in a multi-team environment
- Good team player
- Flexible and quick learner
- Ability to write Dockerfiles, Kubernetes YAML files, and Helm charts
- Experience with monitoring tools like Nagios and Prometheus, and visualisation tools such as Grafana (see the sketch after this list)
- Ability to write Ansible and Terraform scripts
- Linux system experience and administration
- Effective cross-functional leadership skills: working with engineering and operational teams to ensure systems are secure, scalable, and reliable.
- Ability to review deployment and operational environments, i.e., execute initiatives to reduce failure, troubleshoot issues across the entire infrastructure stack, expand monitoring capabilities, and manage technical operations.
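To illustrate the monitoring-tool experience mentioned above, here is a minimal Python sketch that queries the Prometheus HTTP API; the "requests" package, the Prometheus URL, the node_exporter metric, and the threshold are all assumptions for illustration:

    # Minimal sketch: query Prometheus for instances with sustained high CPU usage.
    import requests

    PROMETHEUS_URL = "http://prometheus.internal:9090"  # hypothetical endpoint

    def instances_with_high_cpu(threshold=0.9):
        query = '1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))'
        resp = requests.get(
            f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
        )
        resp.raise_for_status()
        results = resp.json()["data"]["result"]
        return [
            (r["metric"]["instance"], float(r["value"][1]))
            for r in results
            if float(r["value"][1]) > threshold
        ]

    if __name__ == "__main__":
        for instance, cpu in instances_with_high_cpu():
            print(f"{instance}: CPU {cpu:.0%}")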

- Working on scalability, maintainability and reliability of company's products.
- Working with clients to solve their day-to-day challenges, moving manual processes to automation.
- Keeping systems reliable and gauging the effort it takes to get there.
- Understanding and juxtaposing tools and technologies to choose X over Y.
- Understanding Infrastructure as Code and applying software design principles to it.
- Automating tedious work using your favourite scripting languages.
- Taking code from the local system to production by implementing Continuous Integration and Delivery principles.
What you need to have:
- Worked with any one of the programming languages like Go, Python, Java, Ruby.
- Work experience with public cloud providers like AWS, GCP or Azure.
- Understanding of Linux systems and Containers
- Meticulous in creating and following runbooks and checklists
- Microservices experience and use of orchestration tools like Kubernetes/Nomad.
- Understanding of computer networking fundamentals like TCP and UDP (see the sketch after this list).
- Strong bash scripting skills.
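As a small example of the scripting and networking fundamentals listed above, here is a minimal Python sketch of a runbook-style TCP reachability check; the host/port pairs are hypothetical placeholders:

    # Minimal sketch: check that a few TCP endpoints are reachable.
    import socket

    CHECKS = [("db.internal", 3306), ("cache.internal", 6379), ("api.internal", 443)]

    def tcp_reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for host, port in CHECKS:
            status = "open" if tcp_reachable(host, port) else "unreachable"
            print(f"{host}:{port} {status}")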
We are hiring candidates who are looking to work in a cloud environment and are ready to learn and adapt to evolving technologies.
Linux Administrator Roles & Responsibilities:
- 5+ years of professional experience with strong working expertise in Agile environments
- Deep knowledge of managing Linux servers.
- Managing Windows servers (not mandatory).
- Manage Web servers (Apache, Nginx).
- Manage Application servers.
- Strong background & experience in any one scripting language (Bash, Python)
- Manage firewall rules.
- Perform root cause analysis for production errors.
- Basic administration of MySQL, MSSQL.
- Ready to learn and adapt to business requirements.
- Manage information security controls with best practices and processes.
- Support business requirements beyond working hours.
- Ensure the highest possible uptime for services.
- Monitor resource usage (see the sketch after this list).
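For illustration, a minimal Python sketch of the resource-usage monitoring mentioned above, assuming the third-party "psutil" package; the thresholds are hypothetical placeholders:

    # Minimal sketch: report CPU, memory, and disk usage and flag high values.
    import psutil

    THRESHOLDS = {"cpu": 85.0, "memory": 90.0, "disk": 80.0}  # percent

    def check_resources():
        usage = {
            "cpu": psutil.cpu_percent(interval=1),
            "memory": psutil.virtual_memory().percent,
            "disk": psutil.disk_usage("/").percent,
        }
        for name, value in usage.items():
            flag = " (over threshold)" if value > THRESHOLDS[name] else ""
            print(f"{name}: {value:.1f}%{flag}")

    if __name__ == "__main__":
        check_resources()

A script like this can be run from cron and its output fed into whatever alerting is already in place.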
Skills/Requirements
- Bachelor’s Degree or Diploma in Computer Science, Engineering, Software Engineering or a relevant field.
- Experience with Linux-based infrastructures, Linux/Unix administration.
- Knowledge of managing databases such as MySQL and MSSQL.
- Knowledge of scripting languages such as Python and Bash.
- Knowledge of open-source technologies and cloud services like AWS and Azure is a plus. Candidates willing to learn will be preferred.
- Experience in managing web applications.
- Problem-solving attitude.
- 5+ years experience in the IT industry.

- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience designing systems, monitoring metrics, and reading graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others.
Role
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.
You will also help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
This would be a hybrid role, and the person would be expected to also do some application-level programming in their downtime.
Responsibilities
- Deployment, automation, management, and maintenance of production systems.
- Ensuring availability, performance, security, and scalability of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on AWS platform.
- Manage the establishment and configuration of SaaS infrastructure in an agile way by storing infrastructure as code and employing automated configuration management tools, with the goal of being able to re-provision environments at any point in time.
- Be accountable for proper backup and disaster recovery procedures.
- Drive operational cost reductions through service optimizations and demand-based auto scaling (see the sketch after this list).
- Have on-call responsibilities.
- Perform root cause analysis for production errors.
- Use open-source technologies and tools to accomplish specific use cases encountered within the project.
- Use coding languages or scripting methodologies to solve problems with custom workflows.
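As one concrete form of the demand-based auto scaling mentioned above, here is a minimal Python sketch that registers scheduled scaling actions on an Auto Scaling group; it assumes boto3 and AWS credentials, and the group name, sizes, and cron schedules are hypothetical placeholders:

    # Minimal sketch: scale an Auto Scaling group down overnight and back up each morning.
    import boto3

    autoscaling = boto3.client("autoscaling")
    GROUP = "web-prod-asg"  # hypothetical group name

    def schedule_overnight_scale_down():
        autoscaling.put_scheduled_update_group_action(
            AutoScalingGroupName=GROUP,
            ScheduledActionName="scale-down-nightly",
            Recurrence="0 20 * * *",  # 20:00 UTC daily
            MinSize=1,
            MaxSize=2,
            DesiredCapacity=1,
        )
        autoscaling.put_scheduled_update_group_action(
            AutoScalingGroupName=GROUP,
            ScheduledActionName="scale-up-morning",
            Recurrence="0 6 * * *",  # 06:00 UTC daily
            MinSize=3,
            MaxSize=10,
            DesiredCapacity=4,
        )

    if __name__ == "__main__":
        schedule_overnight_scale_down()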
Requirements
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Prior experience as a software developer in a couple of high-level programming languages.
- Extensive experience in any JavaScript-based framework, since we will be deploying services to NodeJS on AWS Lambda (serverless).
- Extensive experience with web servers such as Nginx/Apache
- Strong Linux system administration background.
- Ability to present and communicate the architecture in a visual form.
- Strong knowledge of AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, NAT gateway, DynamoDB)
- Experience maintaining and deploying highly-available, fault-tolerant systems at scale (~ 1 Lakh users a day)
- A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
- Expertise with Git
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Strong experience with SQL and NoSQL databases such as MySQL, Elasticsearch, Redis, and/or Mongo.
- Stellar troubleshooting skills with the ability to spot issues before they become problems.
- Current with industry trends, IT ops and industry best practices, and able to identify the ones we should implement.
- Time and project management skills, with the capability to prioritize and multitask as needed.

- Hands-on experience in the following is a must: Unix, Python, and shell scripting.
- Hands-on experience in creating infrastructure on the AWS cloud platform is a must.
- Must have experience in industry standard CI/CD tools like Git/BitBucket, Jenkins, Maven, Artifactory and Chef.
- Must be good at these DevOps tools:
Version Control Tools: Git, CVS
Build Tools: Maven and Gradle
CI Tools: Jenkins
- Hands-on experience with analytics tools and the ELK stack.
- Knowledge of Java will be an advantage.
- Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort.
- Ability to help debug and optimise code and automate routine tasks.
- Should have excellent communication skills.
- Experience in dealing with difficult situations and making decisions with a sense of urgency.
- Experience with Agile and Jira will be an added advantage.
Work with the engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation, and scalability, and ensuring maximum availability of server infrastructure
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, Cassandra, etc.
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure, and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine hardware or software requirements related to such changes.


