Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
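Much of the deployment automation described above reduces to small, reusable building blocks, such as a health-check gate that a pipeline runs before promoting a release. A minimal, illustrative Python sketch; the `check` callable and timings are hypothetical, not part of any specific stack:

```python
import time

def wait_until_healthy(check, retries=5, base_delay=1.0):
    """Poll a health check with exponential backoff; True once it passes."""
    for attempt in range(retries):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False

# Simulated probe: the service becomes healthy on the third check.
probes = iter([False, False, True])
print(wait_until_healthy(lambda: next(probes), base_delay=0.01))  # True
```

In a real pipeline the `check` would hit a service's health endpoint, and a `False` result would trigger a rollback instead of a promotion.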
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary and benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.
About Tese Capital Limited
At Tese, we see a direct link between business and our planet's vitality. We're here to reduce the friction between reporting, financing, and improving sustainability performance by providing SMEs with the knowledge, access, and connections they need to succeed. Tese brings everything into one ecosystem: from managing your ESG performance and purchasing real, tangible solutions to staying up to date on the green market and accessing sustainable finance, we empower every business to benefit from earth-centric solutions.
We understand that when our businesses become disconnected from nature, risk follows. But we imagine a world - not far from now - where wasteful enterprises evolve into earth-centric allies.
It starts with (re)connection. People connecting to people. People connecting over their communities and livelihoods; (re)connecting their businesses to nature.
This (re)connection offers learning. And in the fertile soil of sustainable technology, fresh approaches begin to sprout. Ways to clean up and do better, supplemented with sustainable financing to unlock your productivity.
And when others see the difference that one-planet solutions make to your enterprise, they’ll want in too; creating more opportunities. Opportunities to scale up, to collaborate and grow. To protect what we have, and restore what we’ve lost.
ABOUT THE TEAM:
The production engineering team is responsible for the key operational pillars (Reliability, Observability, Elasticity, Security, and Governance) of the cloud infrastructure at Swiggy. We strive to excel at and continuously improve on these key operational pillars. We design, build, and operate Swiggy's cloud infrastructure and developer platforms to provide a seamless experience to our internal and external consumers.
What qualities are we looking for:
10+ years of professional experience in infrastructure and production engineering
Strong design, debugging, and problem-solving skills
Proficiency in at least one programming language like Python, Go, or Java.
B Tech/M Tech in Computer Science or equivalent from a reputed college.
Hands-on experience with AWS and Kubernetes or similar cloud/infrastructure platforms
Hands-on with DevOps principles and practices (everything-as-code, CI/CD, test everything, proactive monitoring, etc.)
Deep understanding of OS/virtualization/Containerization, network protocols & concepts
Exposure to modern-day infrastructure technologies, expertise in building and operating distributed systems.
Hands-on coding on any of the languages like Python or GoLang.
Familiarity with software engineering practices including unit testing, code reviews, and design documentation.
Technically mentor and lead the team towards engineering and operational excellence
Act like an owner, strive for excellence.
What will you get to do here?
Be part of a Culture where Customer Obsession, Ownership, Teamwork, Bias for Action and Insist on High standards are a way of life
Come up with best practices to help the team achieve their technical tasks and continually improve the team's technology
Be a hands-on engineer; ensure the frameworks/infrastructure built are well designed, scalable, and of high quality.
Build and/or operate platforms that are highly available, elastic, scalable, operable and observable
Experiment with new & relevant technologies and tools, and drive adoption while measuring yourself on the impact you can create.
Implement the long-term technology vision for the team.
Build/Adapt and implement tools that empower the Swiggy engineering teams to self-manage the infrastructure and services owned by them.
You will identify, articulate, and lead long-term tech visions, strategies, cross-cutting initiatives, and architecture redesigns.
Design systems and make decisions that will keep pace with the rapid growth of Swiggy. Document your work and decision-making processes, and lead presentations and discussions in a way that is easy for others to understand.
Creating architectures & designs for new solutions around existing/new areas. Decide technology & tool choices for the team.
True to its name, CashFlo is on a mission to unlock $100+ billion of trapped working capital in the economy by creating India's largest marketplace for invoice discounting to solve the day-to-day problems faced by businesses. Founded by ex-BCG and ISB / IIM alumni, and backed by SAIF Partners, CashFlo helps democratize access to credit in a fair and transparent manner. Awarded Supply Chain Finance solution of the year in 2019, CashFlo creates a win-win ecosystem for buyers, suppliers, and financiers through its unique platform model. CashFlo shares its parentage with HCS Ltd., a 25-year-old, highly reputed financial services company that has raised over Rs. 15,000 crores in the market to date for over 200 corporate clients.
Our leadership team consists of ex-BCG, ISB / IIM alumni, with a team of industry veterans from financial services serving as the advisory board. We bring to the table deep insights in the SME lending space, based on 100+ years of combined experience in financial services. We are a team of passionate problem solvers, and we are looking for like-minded people to join our team.
The challenge
Solve a complex $300+ billion problem at the cutting edge of Fintech innovation, and make a tangible difference to the small business landscape in India. Find innovative solutions for problems in a yet-to-be-discovered market.
Key Responsibilities
As an early team member, you will get a chance to set the foundations of our engineering culture. You will help articulate our engineering principles and help set the long-term roadmap, making decisions on the evolution of CashFlo's technical architecture and building new features end to end, from talking to customers to writing code.
Our Ideal Candidate Will Have
3+ years of full-time DevOps engineering experience
Hands-on experience working with AWS services
Deep understanding of virtualization and orchestration tools like Docker, ECS
Experience in writing Infrastructure as Code using tools like CDK, CloudFormation, Terragrunt, or Terraform
Experience using centralized logging & monitoring tools such as ELK, CloudWatch, DataDog
Built monitoring dashboards using Prometheus, Grafana
Built and maintained code pipelines and CI/CD
Thorough knowledge of SDLC
Been part of teams that have maintained large deployments
About You
Product-minded. You have a sense for great user experience and a feel for when something is off. You love understanding customer pain points and solving for them.
Get a lot done. You enjoy all aspects of building a product and are comfortable moving across the stack when necessary. You problem-solve independently and enjoy figuring stuff out.
High conviction. When you commit to something, you're in all the way. You're opinionated, but you know when to disagree and commit. Mediocrity is the worst of all possible outcomes.
What's in it for me
Gain exposure to the Fintech space - one of the largest and fastest growing markets in India and globally
Shape India’s B2B Payments landscape through cutting edge technology innovation
Be directly responsible for driving the company's success
Join a high-performance, dynamic, and collaborative work environment that throws up new challenges on a daily basis
Fast-track your career with training, mentoring, and growth opportunities on both the IC and management tracks
Work-life balance and fun team events
DevOps Pre-Sales Consultant
Ashnik
Established in 2009, Ashnik is a leading open source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open source technologies. Our offerings form critical part of Digital transformation, Big Data platform, Cloud and Web acceleration and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises in adopting open source software and are known for their thought leadership.
For more details, kindly visit www.ashnik.com
THE POSITION
Ashnik is looking for an experienced technology consultant to work in the DevOps team in the sales function. The primary areas of focus will be microservices, CI/CD pipelines, Docker, Kubernetes, containerization, and container security. A person in this role will be responsible for leading technical discussions with customers and partners and helping them arrive at the final solution.
RESPONSIBILITIES
- Own pre-sales or sales engineering responsibility to design, present, deliver technical solution
- Be the point of contact for all technical queries for Sales team and partners
- Build full-fledged solution proposals with details of implementation and scope of work for customers
- Contribute to technical writing through blogs, whitepapers, and solution demos
- Make presentations at industry events and participate in them
- Conduct customer workshops to educate them about features in Docker Enterprise Edition
- Coordinate technical escalations with principal vendor
- Get an understanding of various other components and considerations involved in the areas mentioned above
- Be able to articulate the value of products from technology vendors that Ashnik partners with, e.g., Docker, Sysdig, HashiCorp, Ansible, Jenkins, etc.
- Work with partners and the sales team to respond to RFPs and tenders
QUALIFICATION AND EXPERIENCE
- Engineering or equivalent degree
- Must have at least 8 years of experience in the IT industry designing and delivering solutions
- Must have at least 3 years of hands-on experience with Linux and operating systems
- Must have at least 3 years of experience working in an environment with highly virtualized or cloud-based infrastructure
- Must have at least 2 years of hands-on experience with CI/CD pipelines, microservices, containerization, and Kubernetes
- Though coding is not needed in this role, the person should have the ability to understand and debug code if required
- Should be able to explain complex solution in simpler ways
- Should be ready to travel 20-40% in a month
- Should be able to engage with customers to understand the fundamental/driving requirement
DESIRED SKILLS
- Past experience of working with Docker and/or Kubernetes at Scale
- Past experience of working in a DevOps team
- Prior experience in Pre-sales role
Salary: up to 20L
Location: Mumbai
What you will do:
- Handling Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, Database management, Backups and monitoring
- Logging, metrics and alerting management
- Creating Dockerfiles
- Performing root cause analysis for production errors
What you need to have:
- 12+ years of experience in Software Development/QA/Software Deployment, with 5+ years of experience in managing high-performing teams
- Proficiency in VMware, AWS & cloud applications development, deployment
- Good knowledge in Java, Node.js
- Experience working with RESTful APIs, JSON etc
- Experience with Unit/ Functional automation is a plus
- Experience with MySQL, MongoDB, Redis, RabbitMQ
- Proficiency in Jenkins, Ansible, Terraform/Chef/Ant
- Proficiency in Linux-based operating systems
- Proficiency in container infrastructure like Docker and Kubernetes
- Strong problem solving and analytical skills
- Good written and oral communication skills
- Sound understanding in areas of Computer Science such as algorithms, data structures, object oriented design, databases
- Proficiency in monitoring and observability
- Preferred: experience in development associated with Kafka or big data technologies; understanding of essential Kafka components like ZooKeeper and brokers, and optimization of Kafka client applications (producers & consumers)
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, pipelines, CloudWatch, Glue, and other related services
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers
- Good knowledge of and hands-on experience with various AWS services like EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform Root Cause Analysis (RCA) on production deployments and container errors in CloudWatch
- Working on ways to automate and improve deployment and release processes
- High understanding of the serverless architecture concept
- Good with deployment automation tools and investigating to resolve technical issues
- Sound knowledge of APIs, databases, and container-based ETL jobs
- Planning out projects and being involved in project management decisions
Soft Skills
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
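Log analysis for root cause analysis, called out above, often starts with quickly aggregating errors per service before digging deeper. A minimal, illustrative Python sketch; the log format and service names here are invented for the example, not any real system's output:

```python
from collections import Counter

LOGS = """\
2024-05-01T10:00:01 payments ERROR timeout contacting bank API
2024-05-01T10:00:02 payments ERROR timeout contacting bank API
2024-05-01T10:00:03 orders INFO order 123 created
2024-05-01T10:00:04 payments ERROR timeout contacting bank API
2024-05-01T10:00:05 orders ERROR DB connection refused
"""

def error_counts(log_text):
    """Count ERROR lines per service (second whitespace-separated field)."""
    counts = Counter()
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2] == "ERROR":
            counts[parts[1]] += 1
    return counts

# The service with the most errors is the first place to look.
print(error_counts(LOGS).most_common(1))  # [('payments', 3)]
```

In practice the same idea runs against CloudWatch Logs Insights or an ELK query rather than an in-memory string, but the triage logic is identical.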
Numerator is a data and technology company reinventing market research. Headquartered in Chicago, IL, Numerator has 1,600 employees worldwide. The company blends proprietary data with advanced technology to create unique insights for the market research industry that has been slow to change. The majority of Fortune 100 companies are Numerator clients.
Job Description
What We Do and How?
We are a market research company, revolutionizing how it's done! We mix fast-paced development and unique approaches to bring best practices and strategy to our technology. Our tech stack is deep, leveraging several languages and frameworks including Python, C#, Java, Kotlin, React, Angular, and Django, among others. Our engineering hurdles sit at the intersection of technologies ranging from mobile, computer vision, and crowdsourcing to machine learning and big data analytics.
Our Team
From San Francisco to Chicago to Ottawa, our R&D team comprises talented individuals spanning a robust tech stack. The R&D team includes product, data analytics, and engineers across Front End, Back End, DevOps, Business Intelligence, ETL, Data Science, Mobile Apps, and much more. Across these different groups we work towards one common goal: to build products into efficient and seamless user experiences that help our clients succeed.
Numerator is looking for an Infrastructure Engineer to join our growing team. This is a unique opportunity where you will get a chance to work with established and rapidly evolving platforms that handle millions of requests and massive amounts of data. In this position, you will be responsible for taking on new initiatives to automate, enhance, maintain, and scale services in a rapidly-scaling SaaS environment.
As a member of our team, you will make an immediate impact as you help build out and expand our technology platforms across several software products. This is a fast-paced role with high growth, visibility, impact, and where many of the decisions for new projects will be driven by you and your team from inception through production.
Some of the technologies we frequently use include: Terraform, Ansible, SumoLogic, Kubernetes, and many AWS-native services.
• Develop and test the cloud infrastructure to scale a rapidly growing ecosystem.
• Monitor and improve DevOps tools and processes, automate mundane tasks, and improve system reliability.
• Provide deep expertise to help steer scalability and stability improvements early in the development life-cycle while working with the rest of the team to automate existing processes that deploy, test, and manage our production environments.
• Train teams to improve self-healing and self-service cloud-based ecosystems in an evolving AWS infrastructure.
• Build internal tools to demonstrate performance and operational efficiency.
• Develop comprehensive monitoring solutions to provide full visibility to the different platform components using tools and services like Kubernetes, Sumologic, Prometheus, Grafana.
• Identify and troubleshoot any availability and performance issues at multiple layers of deployment, from hardware, operating environment, network, and application.
• Work cross-functionally with various teams to improve Numerator’s infrastructure through automation.
• Work with other teams to assist with issue resolutions related to application configuration, deployment, or debugging.
• Lead by example and evangelize DevOps best practice within other engineering teams at Numerator.
Skills & Requirements
What you bring
• A minimum of 3 years of work experience in backend software, DevOps, or a related field.
• A passion for software engineering, automation and operations and are excited about reliability, availability and performance.
• Availability to participate in after-hours on-call support with your fellow engineers.
• Strong analytical and problem-solving mindset combined with experience troubleshooting large scale systems.
• Fundamental knowledge of networking, operating systems, and package/build systems (IP subnets and routing, ACLs, core Ubuntu, pip, and npm).
• Experience with automation technologies to build, deploy and integrate both infrastructure and applications (e.g., Terraform, Ansible).
• Experience using scripting languages like Python and *nix tools (Bash, sed/awk, Make).
• You enjoy developing and managing real-time distributed platforms and services that scale to billions of requests.
• Have the ability to manage multiple systems across stratified environments.
• A deep enthusiasm for the Cloud and DevOps and keen to get other people involved.
• Experience with scaling and operationalizing distributed data stores, file systems and services.
• Experience running services in AWS or other cloud platforms, with strong experience in Linux systems.
• Experience in modern software paradigms including cloud applications and serverless architectures.
• You look ahead to identify opportunities and foster a culture of innovation.
• BS, MS or Ph.D. in Computer Science or a related field, or equivalent work experience.
Nice to haves
• Previous experience working with a geographically distributed software engineering team.
• Experience working with Jenkins or CircleCI
• Experience with storage optimizations and management
• Solid understanding of building scalable, highly performant systems and services
• Expertise with big data, analytics, machine learning, and personalization.
• Start-up or CPG industry experience
If this sounds like something you would like to be part of, we’d love for you to apply! Don't worry if you think that you don't meet all the qualifications here. The tools, technology, and methodologies we use are constantly changing and we value talent and interest over specific experience.
Disclaimer: We do not charge any fee for employment and the same applies to the Recruitment Partners who we work with. Numerator is an equal opportunity employer. Employment decisions are based on merit. Additionally, we do not ask for any refundable security deposit to be paid in bank accounts for employment purposes. We request candidates to be cautious of misleading communications and not pay any fee/deposit to individuals/agencies/employment portals on the pretext of attending the Numerator interview process or seeking employment with us. These would be fraudulent in nature. Anyone dealing with such individuals/agencies does so at their own risk.
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll be figuring out how to deploy applications ensuring high availability and fault tolerance, along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that go up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PostgreSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
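Making applications "go up and down on command" usually comes down to a proportional scaling rule like the one Kubernetes' Horizontal Pod Autoscaler documents: desired replicas = ceil(current replicas × current metric / target metric), clamped to bounds. An illustrative Python sketch of that calculation; the metric values and bounds are hypothetical:

```python
import math

def desired_replicas(current, metric, target, min_r=1, max_r=10):
    """HPA-style proportional scaling: ceil(current * metric / target), clamped."""
    return max(min_r, min(max_r, math.ceil(current * metric / target)))

# 4 replicas running at 90% of a 60% CPU-utilization target -> scale out to 6.
print(desired_replicas(4, metric=90, target=60))  # 6
```

The same formula scales back in when the metric falls below target, which is why a single rule can drive both directions.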
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code - CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline, GitHub Actions.
- Advanced hold on AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced Containerization - Docker, Kubernetes, ECS.
- Experience with managed services like database cluster, distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing paid leaves.
The expectation is to set up complete automation of the CI/CD pipeline and monitoring, and to ensure high availability of the pipeline. The automated deployment environment can be on-prem or cloud (virtual instances, containerized, and serverless). The role also covers complete test automation and ensuring the security of both the application and the infrastructure.
ROLES & RESPONSIBILITIES
- Configure Jenkins with load distribution between master/slave nodes
- Set up the CI pipeline with Jenkins and cloud (AWS or Azure): code build, static tests (quality & security)
- Set up dynamic test configuration with Selenium and other tools
- Set up application and infrastructure scanning for security; post-deployment security plan including PEN testing and usage of a RASP tool
- Configure and ensure HA of the pipeline and monitoring
- Set up composition analysis in the pipeline
- Set up the SCM and artifacts repository and management for branching, merging, and archiving
- Must work in an Agile environment using an ALM tool like Jira
DESIRED SKILLS
Extensive hands-on Continuous Integration and Continuous Delivery technology experience with .NET, Node, Java, and C++ based projects (web, mobile, and standalone). Experience configuring and managing:
- ALM tools like Jira, TFS, etc.
- SCM such as GitHub, GitLab, CodeCommit
- Automation tools such as Terraform, CHEF, or Ansible
- Package repo configuration (Artifactory / Nexus), package managers like NuGet & Chocolatey
- Database configuration (SQL & NoSQL), web/proxy setup (IIS, Nginx, Varnish, Apache)
- Deep knowledge of multiple monitoring tools and how to mine them for advanced data
- Prior work with Helm, Postgres, MySQL, Redis, Elasticsearch, microservices, message queues, and related technologies
- Test automation with Selenium / Cucumber; setting up of test simulators
- AWS Certified Architect and/or Developer; Associate considered, Professional preferred
- Proficient in Bash, PowerShell, Groovy, YAML, Python, and NodeJS; web concepts such as REST APIs; aware of MVC and SPA application design
- TDD experience and quality control with SonarQube or Checkmarx, TIOBE TICS, and Coverity
- Thorough with Linux (Ubuntu, Debian, CentOS), Docker (Dockerfile/compose/volume), and Kubernetes cluster setup
- Expert in workflow tools: Jenkins (declarative, plugins)/TeamCity and build server configuration
- Experience with AWS CloudFormation / CDK and delivery automation
- Ensure end-to-end deployments succeed and resources come up in an automated fashion
- Good to have: ServiceNow configuration experience for collaboration
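Quality control with tools like SonarQube ultimately reduces to comparing measured metrics against pipeline thresholds and failing the build when any fall short. A toy Python sketch of that gating logic; the metric names and thresholds are invented for illustration and do not reflect any tool's real API:

```python
def quality_gate(metrics, thresholds):
    """Compare measured metrics against minimum thresholds.

    Returns (passed, failing_metric_names)."""
    failures = [name for name, minimum in thresholds.items()
                if metrics.get(name, 0) < minimum]
    return (not failures, failures)

# 78% coverage misses an 80% minimum, so the gate fails on "coverage".
print(quality_gate({"coverage": 78.0, "duplication_free": 96.5},
                   {"coverage": 80.0, "duplication_free": 95.0}))
# (False, ['coverage'])
```

In a Jenkins declarative pipeline, a stage would run this check and abort the build on a failed gate, which is what keeps sub-threshold code out of the artifact repository.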
What you will get:
- To be a part of the Core-Team 💪
- A Chunk of ESOPs 🚀
- Creating High Impact by Solving a Problem at Large (No one in the World has a similar product) 💥
- High Growth Work Environment ⚙️
What we are looking for:
- An 'Exceptional Executioner' -> Leader -> Create an Impact & Value 💰
- Ability to take Ownership of your work
- Past experience in leading a team
As an Infrastructure Engineer at Navi, you will be building a resilient infrastructure platform using modern infrastructure engineering practices.
You will be responsible for the availability, scaling, security, performance, and monitoring of the Navi cloud platform. You'll be joining a team that follows best practices in infrastructure as code.
Your Key Responsibilities
- Build out infrastructure components like API Gateway, service mesh, service discovery, and container orchestration platforms like Kubernetes.
- Developing reusable Infrastructure code and testing frameworks
- Build meaningful abstractions to hide the complexities of provisioning modern infrastructure components
- Design a scalable Centralized Logging and Metrics platform
- Drive solutions to reduce Mean Time To Recovery (MTTR) and enable high availability.
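MTTR, the metric named above, is simply the mean of incident recovery durations, so tracking it only needs incident start and end timestamps. A small Python sketch with made-up incident data:

```python
from datetime import datetime

def mttr_minutes(incidents):
    """Mean time to recovery, in minutes, over (start, end) ISO-8601 pairs."""
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60
        for start, end in incidents
    ]
    return sum(durations) / len(durations)

incidents = [
    ("2024-05-01T10:00:00", "2024-05-01T10:30:00"),  # 30-minute outage
    ("2024-05-02T09:00:00", "2024-05-02T09:10:00"),  # 10-minute outage
]
print(mttr_minutes(incidents))  # 20.0
```

Driving MTTR down then means shrinking those durations: faster detection via alerting and faster recovery via automated rollback or failover.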
What to Bring
- Good to have: experience in managing large-scale cloud infrastructure, preferably AWS and Kubernetes
- Experience in developing applications using programming languages like Java, Python and Go
- Experience in handling logs and metrics at a high scale.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
Requirements:-
- Must have a good understanding of Python and shell scripting with industry-standard coding conventions
- Must possess good coding and debugging skills
- Experience in Design & Development of test framework
- Experience in Automation testing
- Good to have experience in Jenkins framework tool
- Good to have exposure to Continuous Integration process
- Experience in Linux and Windows OS
- Desirable to have Build & Release Process knowledge
- Experience in Automating Manual test cases
- Experienced in automating OS / FW related tasks
- Understanding of BIOS / FW QA is a strong plus
- OpenCV experience is a plus
- Good to have platform exposure
- Must have good Communication skills
- Good leadership and collaboration capabilities, as the individual will have to work with multiple teams and single-handedly maintain the automation framework and enable the manual validation team