An Azure DevOps engineer should have a deep understanding of container principles and hands-on experience with Docker.
They should also be able to set up and manage clusters using Azure Kubernetes Service (AKS). An understanding of API management, Azure Key Vault, ACR, and networking concepts such as virtual networks, subnets, NSGs, and route tables is also required. Familiarity with at least one API management product such as Apigee, Kong, or Azure APIM is a must, along with strong experience in IaC technologies such as Terraform, ARM/Bicep templates, GitHub pipelines, SonarQube, etc.
- Designing DevOps strategies: Recommending strategies for migrating and consolidating DevOps tools, designing an Agile work management approach, and creating a secure development process
- Implementing DevOps development processes: Designing version control strategies, integrating source control, and managing build infrastructure
- Managing application configuration and secrets: Ensuring system and infrastructure availability, stability, scalability, and performance
- Automating processes: Overseeing code releases and deployments with an emphasis on continuous integration and delivery
- Collaborating with teams: Working with architects and developers to ensure smooth code integration, and collaborating with development and operations teams to define pipelines.
- Documentation: Producing detailed development architecture designs, setting up the DevOps tools, and working with the CI/CD specialist to integrate the automated CI/CD pipelines with those tools
- Ensuring security and compliance/DevSecOps: Managing code quality and security policies
- Troubleshooting issues: Investigating issues and responding to customer queries
- Core Skills: Deep understanding of container principles with hands-on Docker experience; ability to set up and manage clusters using Azure Kubernetes Service (AKS); understanding of API management, Azure Key Vault, ACR, and networking concepts such as virtual networks, subnets, NSGs, and route tables; familiarity with at least one API management product such as Apigee, Kong, or Azure APIM; strong experience with IaC technologies such as Terraform, ARM/Bicep templates, GitHub pipelines, and SonarQube.
- Additional Skills: Self-starter with the ability to execute tasks on time; excellent communication skills; ability to come up with multiple solutions to a problem; comfortable interacting with client-side experts to resolve issues by providing correct pointers; excellent debugging skills; ability to break down tasks into smaller steps.
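The CI/CD duties above (automated build, test, and release gating) can be sketched in Python. This is a minimal, hypothetical example: the report fields (`failed`, `coverage`) and the 80% threshold are illustrative assumptions, not any specific pipeline's format.

```python
# Minimal CI quality gate: allow deployment only when tests pass and
# coverage meets a threshold. Field names and threshold are assumptions.
import json

def ci_gate(report_json: str, min_coverage: float = 80.0) -> bool:
    """Return True when the build may proceed to deployment."""
    report = json.loads(report_json)
    no_failures = report.get("failed", 0) == 0
    coverage_ok = report.get("coverage", 0.0) >= min_coverage
    return no_failures and coverage_ok

if __name__ == "__main__":
    sample = json.dumps({"passed": 120, "failed": 0, "coverage": 86.5})
    print(ci_gate(sample))  # True
```

A real pipeline would read the report emitted by the test runner and use the gate's result as the job's exit status.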
🚀 We're Hiring: Python AWS Fullstack Developer at InfoGrowth! 🚀
Join InfoGrowth as a Python AWS Fullstack Developer and be a part of our dynamic team driving innovative cloud-based solutions!
Job Role: Python AWS Fullstack Developer
Location: Bangalore & Pune
Mandatory Skills:
- Proficiency in Python programming.
- Hands-on experience with AWS services and migration.
- Experience in developing cloud-based applications and pipelines.
- Familiarity with DynamoDB, OpenSearch, and Terraform (preferred).
- Solid understanding of front-end technologies: ReactJS, JavaScript, TypeScript, HTML, and CSS.
- Experience with Agile methodologies, Git, CI/CD, and Docker.
- Knowledge of Linux (preferred).
Preferred Skills:
- Understanding of ADAS (Advanced Driver Assistance Systems) and automotive technologies.
- AWS Certification is a plus.
Why Join InfoGrowth?
- Work on cutting-edge technology in a fast-paced environment.
- Collaborate with talented professionals passionate about driving change in the automotive and tech industries.
- Opportunities for professional growth and development through exciting projects.
🔗 Apply Now to elevate your career with InfoGrowth and make a difference in the automotive sector!
About Lean Technologies
Lean is on a mission to revolutionize the fintech industry by providing developers with a universal API to access their customers' financial accounts across the Middle East. We’re breaking down infrastructure barriers and empowering the growth of the fintech industry. With Sequoia leading our $33 million Series A round, Lean is poised to expand its coverage across the region while continuing to deliver unparalleled value to developers and stakeholders.
Join us and be part of a journey to enable the next generation of financial innovation. We offer competitive salaries, private healthcare, flexible office hours, and meaningful equity stakes to ensure long-term alignment. At Lean, you'll work on solving complex problems, build a lasting legacy, and be part of a diverse, inclusive, and equal opportunity workplace.
About the role:
Are you a highly motivated and experienced software engineer looking to take your career to the next level? Our team at Lean is seeking a talented engineer to help us build the distributed systems that allow our engineering teams to deploy our platform in multiple geographies across various deployment solutions. You will work closely with functional heads across software, QA, and product teams to deliver scalable and customizable release pipelines.
Responsibilities
- Distributed systems architecture – understand and manage the most complex systems
- Continual reliability and performance optimization – enhancing the observability stack to improve proactive detection and resolution of issues
- Employing cutting-edge methods and technologies, continually refining existing tools to enhance performance and drive advancements
- Problem-solving capabilities – troubleshooting complex issues and proactively reducing toil through automation
- Experience in technical leadership and setting technical direction for engineering projects
- Collaboration skills – working across teams to drive change and provide guidance
- Technical expertise – deep skills and the ability to act as a subject matter expert in one or more of: IaC, observability, coding, reliability, debugging, system design
- Capacity planning – effectively forecasting demand and reacting to changes
- Analyze and improve efficiency, scalability, and stability of various system resources
- Incident response – rapidly detecting and resolving critical incidents. Minimizing customer impact through effective collaboration, escalation (including periodic on-call shifts) and postmortems
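Much of the reliability and capacity-planning work above begins with SLO error-budget arithmetic, which can be sketched as follows (the 99.9% target and request counts are illustrative, not Lean's actual SLOs):

```python
# Sketch of SLO error-budget accounting: how many failures remain
# "affordable" in the current window before the SLO is breached.

def error_budget_remaining(slo: float, total_requests: int, failed: int) -> int:
    """Failures still allowed this window before the SLO is breached."""
    # round() avoids floating-point drift, e.g. 1e6 * (1 - 0.999).
    allowed = round(total_requests * (1.0 - slo))
    return allowed - failed

if __name__ == "__main__":
    # A 99.9% SLO over 1M requests allows 1000 failures; 200 used so far.
    print(error_budget_remaining(0.999, 1_000_000, 200))  # 800
```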
Requirements
- 10+ years of experience in Systems Engineering, DevOps, or SRE roles running large-scale infrastructure, cloud, or web services
- Strong background in Linux/Unix Administration and networking concepts
- We work on OCI but would accept candidates with solid GCP/AWS or other cloud providers’ knowledge and experience
- 3+ years of experience with managing Kubernetes clusters, Helm, Docker
- Experience in operating CI/CD pipelines that build and deliver services on the cloud and on-premise
- Experience with CI/CD tools/services such as Jenkins, GitHub Actions, ArgoCD, etc.
- Experience with configuration management tools either Ansible, Chef, Puppet, or equivalent
- Infrastructure as Code - Terraform
- Experience in production environments with both relational and NoSQL databases
- Coding with one or more of the following: Java, Python, and/or Go
Bonus
- MultiCloud or Hybrid Cloud experience
- OCI and GCP
Why Join Us?
At Lean, we value talent, drive, and entrepreneurial spirit. We are constantly on the lookout for individuals who identify with our mission and values, even if they don’t meet every requirement. If you're passionate about solving hard problems and building a legacy, Lean is the right place for you. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, gender, sexual orientation, or disability.
About Databook
Databook is the world’s first AI-powered enterprise customer intelligence platform, founded in 2017 to empower enterprise sales teams with a distinct advantage. Leading companies like Microsoft, Salesforce, and Databricks rely on Databook to enhance customer engagement and accelerate revenue acquisition. Backed by Bessemer Ventures, DFJ Growth, M12 (Microsoft’s Venture fund), Salesforce Ventures, and Threshold Ventures, we operate as a customer-focused, innovative organization headquartered in Palo Alto, CA, with a global distributed team.
About Our Technology Team
The Engineering team at Databook brings together collaborative and technically passionate individuals to deliver innovative customer intelligence solutions. Led by former Google and Salesforce engineers, this group explores the full engineering lifecycle, driving impactful outcomes and offering opportunities for leadership and growth in a hyper-growth context.
The Opportunity
We're seeking a proactive and skilled Platform Engineer to enhance the reliability, scalability, and performance of our platform. This role offers the chance to collaborate closely with cross-functional teams, integrate new technologies, and advance our DevOps and SRE practices. If you're passionate about driving excellence, building robust systems, and contributing to the evolution of an AI-driven platform, join our dynamic team!
Responsibilities
- Promote best practices and standards across engineering teams to ensure platform reliability and performance.
- Collaborate with product management and engineering to enhance platform scalability and align with business goals.
- Develop and optimize backend systems and infrastructure to support platform growth.
- Implement and enhance CI/CD pipelines, automation, monitoring, and alerting systems.
- Document system performance, incidents, and resolutions, producing detailed technical reports.
- Formulate backend architecture plans and provide guidance on deployment strategies and reliability improvements.
- Participate in an on-call rotation to ensure 24/7 platform reliability and rapid incident response.
Qualifications
- 5+ years of experience in Platform or Infrastructure Engineering, DevOps, SRE, or similar roles.
- Strong backend development experience using Python and JavaScript/TypeScript.
- Solid understanding of API design and implementation.
- Proficiency in SQL.
- Experience with CI/CD tools like Jenkins, GitLab CI, CircleCI.
- Hands-on experience with IaC tools such as Terraform, CloudFormation, Ansible.
- Familiarity with monitoring and observability tools like Datadog, Splunk, New Relic, Prometheus.
- Strong analytical and problem-solving skills with a focus on long-term solutions.
- Excellent communication skills for collaboration with technical and non-technical stakeholders.
- Ability to thrive in a fast-paced environment and manage multiple priorities.
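Monitoring and alerting work of the kind listed above frequently reduces to percentile math (e.g. "page when p95 latency exceeds a threshold"). A minimal nearest-rank sketch, with illustrative sample values:

```python
# Tiny latency-percentile helper of the sort used when wiring up
# alerting. Nearest-rank method; the sample values are illustrative.
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: smallest value covering p% of samples."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100.0 * len(ordered))  # 1-based rank
    return ordered[max(rank, 1) - 1]

if __name__ == "__main__":
    latencies_ms = [12, 15, 11, 230, 14, 13, 16, 18, 12, 900]
    print(percentile(latencies_ms, 95))  # 900
```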
Working Arrangements
This position offers a hybrid work mode, combining remote and in-office work as mutually agreed upon.
Ideal Candidates Will Also Have
- Interest or experience in Machine Learning and Generative AI.
- Exposure to performance, load, and stress testing frameworks.
- Familiarity with security best practices and tools in cloud environments.
Join Us and Enjoy These Perks!
- Competitive salary with bonus
- Medical insurance coverage
- Generous leave and public holidays
- Employee referral bonus program
- Annual learning stipend for professional development
- Complimentary subscription to Masterclass
We are looking for a Senior DevOps engineer with at least 3 years of experience in:
- AWS
- Terraform
- GitHub Actions
- CI/CD
- Bash/Linux
- Docker/ECS
Please note that this is a full-time position and we are a remote-first company.
We follow a Bring Your Own Device (BYOD) model, so make sure you have a laptop that meets the minimum requirements.
https://logiclinklabs.com/careers
at Scoutflo
Scoutflo is a platform that automates complex infrastructure requirements for Kubernetes.
Job Description:
- In-depth knowledge of full-stack development principles and best practices.
- Expertise in building web applications, with strong proficiency in languages such as Node.js, React, and Go.
- Experience developing and consuming RESTful & gRPC API Protocols.
- Familiarity with CI/CD workflows and DevOps processes.
- Solid understanding of cloud platforms and container orchestration technologies.
- Experience with Kubernetes pipelines and workflows using tools like Argo CD.
- Experience with designing and building user-friendly interfaces.
- Excellent understanding of distributed systems, databases, and APIs.
- A passion for writing clean, maintainable, and well-documented code.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent communication and interpersonal skills.
- Experience with building self-serve platforms or user onboarding experiences.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform.
- A strong understanding of security best practices for Kubernetes deployments.
- A good grasp of setting up network architecture for distributed systems.
Must have:
1) Experience with managing Infrastructure on AWS/GCP or Azure
2) Managed Infrastructure on Kubernetes
Your Opportunity
Join our dynamic team as a Full Stack Software Developer, where you'll work at the intersection of innovation and leadership. You'll be part of a passionate group of engineers dedicated to building cutting-edge SaaS solutions that solve real customer challenges. This role is perfect for an experienced engineer who thrives on managing teams, collaborating with leadership, and driving product development. You'll work directly with the CEO and senior architects, ensuring that our products meet the highest design and performance standards.
Key Responsibilities
- Lead, manage, and mentor a team of engineers to deliver scalable, high-performance solutions.
- Coordinate closely with the CEO and product leadership to align on goals and drive the vision forward.
- Collaborate with distributed teams to design, build, and refine core product features that serve a global audience.
- Stay hands-on with coding and architecture, driving key services and technical initiatives from end to end.
- Troubleshoot, debug, and optimize existing systems to ensure smooth product operations.
Requirements & Technical Skills
- Bachelor's/Master's/PhD in Computer Science, Engineering, or related fields (B.Tech, M.Tech, BCA, B.E./M.E.).
- 4 to 8 years of hands-on experience as a software developer, ideally in a SaaS environment.
- Proven track record in developing scalable, distributed systems and services.
- Solid understanding of the Software Development Lifecycle (SDLC).
- Strong programming experience in Spring & Hibernate with Kotlin, React, Nest.js, Python, and Shell scripting.
- Expertise in Unix-based systems, container technologies, and virtual machines.
- Knowledge of both relational and non-relational databases (MySQL, PostgreSQL, MongoDB, DocumentDB).
Preferred Qualifications
- Familiarity with Agile methodologies.
- Experience working on both structured and unstructured data sources.
Soft Skills
- Strong leadership, coaching, and mentoring capabilities to inspire and guide a team of engineers.
- Excellent communication skills, with the ability to present complex technical concepts clearly to non-technical stakeholders.
- Adaptable to new technologies in a fast-paced environment.
DevOps & Automation:
- Experience in CI/CD tools like Azure DevOps, YAML, Git, and GitHub. Capable of automating build, test, and deployment processes to streamline application delivery.
- Hands-on experience with Infrastructure as Code (IaC) tools such as Bicep (preferred), Terraform, Ansible, and ARM Templates.
Cloud Services & Architecture:
- Experience in Azure Cloud services, including Web Apps, AKS, Application Gateway, APIM, and Logic Apps.
- Good understanding of cloud design patterns, security best practices, and cost optimization strategies.
Scripting & Automation:
- Experience in developing and maintaining automation scripts using PowerShell to manage, monitor, and support applications.
- Familiar with Azure CLI, REST APIs, and automating workflows using Azure DevOps Pipelines.
Data Integration & ADF:
- Working knowledge or basic hands-on experience with Azure Data Factory (ADF), focusing on developing and managing data pipelines and workflows.
- Knowledge of data integration practices, including ETL/ELT processes and data transformations.
Application Management & Monitoring:
- Ability to provide comprehensive support for both new and legacy applications.
- Proficient in managing and monitoring application performance using tools like Azure Monitor, Log Analytics, and Application Insights.
- Understanding of application security principles and best practices.
Database Skills:
- Basic experience with SQL and Azure SQL, including database backups, restores, and application data management.
As a Kafka Administrator at Cargill, you will work across the full set of data platform technologies, spanning on-prem and SaaS solutions, to empower highly performant, modern, data-centric solutions. Your work will play a critical role in enabling analytical insights and process efficiencies for Cargill’s diverse and complex business environments. You will work in a small team that shares your passion for building, configuring, and supporting platforms while sharing, learning, and growing together.
- Develop and recommend improvements to standard and moderately complex application support processes and procedures.
- Review, analyze and prioritize incoming incident tickets and user requests.
- Perform programming, configuration, testing and deployment of fixes or updates for application version releases.
- Implement security processes to protect data integrity and ensure regulatory compliance.
- Keep an open channel of communication with users and respond to standard and moderately complex application support requests and needs.
MINIMUM QUALIFICATIONS
- Minimum of 2-4 years of experience
- Knowledge of Kafka cluster management, alerting/monitoring, and performance tuning
- Full-ecosystem Kafka administration (Kafka, ZooKeeper, kafka-rest, Kafka Connect)
- Experience implementing Kerberos security
- Preferred:
- Experience in Linux system administration
- Authentication plugin experience such as basic, SSL, and Kerberos
- Production incident support including root cause analysis
- AWS EC2
- Terraform
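Kafka alerting/monitoring of the kind described above usually centers on consumer lag. A minimal sketch of the arithmetic (the per-partition offsets are hypothetical values of the sort standard Kafka tooling reports):

```python
# Consumer-lag check a Kafka administrator might automate for alerting:
# lag = log-end offset minus committed offset, summed over partitions.
# The offsets below are hypothetical example values.

def total_lag(end_offsets: dict, committed: dict) -> int:
    """Sum of (log-end offset - committed offset) across partitions."""
    return sum(end_offsets[p] - committed.get(p, 0) for p in end_offsets)

if __name__ == "__main__":
    end = {0: 1500, 1: 1480, 2: 1510}   # partition -> log-end offset
    done = {0: 1500, 1: 1400, 2: 1505}  # partition -> committed offset
    print(total_lag(end, done))  # 85
```

An alerting rule would then compare this total (or the per-partition maximum) against a threshold.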
The Cloud Engineer will design and develop the capabilities of the company's cloud platform and automate application deployment pipelines to that platform. In this role, you will be an essential partner and technical specialist for cloud platform development and operations.
Key Accountabilities
Participate in a dynamic development environment to engineer evolving customer solutions on Azure. Support SAP application teams with their requirements for application and release management. Develop automation capabilities in the cloud platform to enable provisioning and upgrades of cloud services.
Design continuous integration delivery pipelines with infrastructure as code, automation, and testing capabilities to facilitate automated deployment of applications.
Develop testable code to automate cloud platform capabilities and cloud platform observability tools. Provide engineering support for the implementation/PoC of new tools and techniques.
Independently handle support of critical SAP application infrastructure deployed on Azure. Other duties as assigned.
Qualifications
Minimum Qualifications
Bachelor’s degree in a related field or equivalent experience
Minimum of 5 years of related work experience
PREFERRED QUALIFICATIONS
Supporting complex application development activities in a DevOps environment.
Building and supporting fully automated cloud platform solutions as Infrastructure as Code. Working with cloud services platforms, primarily Azure, and automating the cloud infrastructure life cycle with tools such as Terraform and GitHub Actions.
Scripting and programming languages such as Python, Go, and PowerShell.
Good knowledge of applying the Azure Cloud Adoption Framework and implementing the Microsoft Well-Architected Framework.
Experience with Observability Tools, Cloud Infrastructure security services on Azure, Azure networking topologies and Azure Virtual WAN.
Experience automating Windows and Linux operating system deployments and management in automatically scaling deployments.
Good to have:
1) Managing cloud infrastructure using IaC methods (Terraform) and Go
2) Knowledge of complex enterprise networking: LAN, WAN, VNet, VLAN
3) Good understanding of application architecture: databases, tiered architecture
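One common way to wire Terraform into pipelines like those above is to generate a `terraform.tfvars.json` from environment parameters. A hedged sketch; the variable names (`environment`, `location`, `vm_count`) are illustrative assumptions, not a real module's interface:

```python
# Render Terraform variable definitions as JSON (terraform.tfvars.json),
# which Terraform reads natively. Variable names are hypothetical.
import json

def render_tfvars(env: str, location: str, vm_count: int) -> str:
    """Emit Terraform-readable JSON variable definitions."""
    return json.dumps(
        {"environment": env, "location": location, "vm_count": vm_count},
        indent=2,
        sort_keys=True,
    )

if __name__ == "__main__":
    print(render_tfvars("prod", "westeurope", 3))
```

A CI job (e.g. a GitHub Actions step) could write this string to `terraform.tfvars.json` before running `terraform plan`.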
About the job
MangoApps builds enterprise products that make employees at organizations across the globe more effective and productive in their day-to-day work. We seek tech pros, great communicators, collaborators, and efficient team players for this role.
Job Description:
Experience: 5+ years (relevant experience as an SRE)
Open positions: 2
Job Responsibilities as an SRE
- Must have very strong experience in Linux (Ubuntu) administration
- Strong in network troubleshooting
- Experienced in handling and diagnosing the root cause of compute and database outages
- Strong experience required with cloud platforms, specifically Azure or GCP (proficiency in at least one is mandatory)
- Must have very strong experience in designing, implementing, and maintaining highly available and scalable systems
- Must have expertise in CloudWatch or similar log systems and troubleshooting using them
- Proficiency in scripting and programming languages such as Python, Go, or Bash is essential
- Familiarity with configuration management tools such as Ansible, Puppet, or Chef is required
- Must possess knowledge of database/SQL optimization and performance tuning.
- Respond promptly to and resolve incidents to minimize downtime
- Implement and manage infrastructure using IaC tools like Terraform, Ansible, or CloudFormation
- Excellent problem-solving skills with a proactive approach to identifying and resolving issues are essential.
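Incident-minimizing automation like that described above leans on retry with exponential backoff. A minimal sketch of the delay schedule (the base, cap, and retry count are illustrative defaults):

```python
# Exponential-backoff delay schedule: base * 2^attempt, capped, so
# retries against a struggling service back off instead of hammering it.

def backoff_delays(retries: int, base: float = 1.0, cap: float = 30.0):
    """Delays in seconds for each retry attempt, capped at `cap`."""
    return [min(base * (2 ** attempt), cap) for attempt in range(retries)]

if __name__ == "__main__":
    print(backoff_delays(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

In practice a small random jitter is usually added to each delay to avoid synchronized retry storms.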
DevOps Engineer (Permanent)
Experience: 8 to 12 years
Location: Remote for 2-3 months (Any Mastek Location- Chennai/Mumbai/Pune/Noida/Gurgaon/Ahmedabad/Bangalore)
Max Salary = 28 LPA (including 10% variable)
Notice Period: Immediate / max 10 days
Mandatory Skills: Splunk or Datadog, GitLab, retail domain
· Bachelor’s degree in Computer Science/Information Technology, or in a related technical field or equivalent technology experience.
· 10+ years’ experience in software development
· 8+ years of experience in DevOps
· Mandatory Skills: Splunk or Datadog, GitLab, EKS, retail domain experience
· Experience with the following Cloud Native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes
· Working knowledge of containers, i.e., Docker and Kubernetes, ideally with experience transitioning an organization through their adoption
· Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full stack integration
· Strong working knowledge of enterprise platforms, tools and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning
· Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery
· Experience working with a variety of vendor APIs including cloud, physical and logical infrastructure devices
· Strong working knowledge of Cloud offerings & Cloud DevOps Services (EC2, ECS, IAM, Lambda, Cloud services, AWS CodeBuild, CodeDeploy, Code Pipeline etc or Azure DevOps, API management, PaaS)
· Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.
· Manage and maintain standards for DevOps tools used by the team
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer III at JPMorgan Chase within the Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Expert-level programming in Python. Experience designing and building APIs using popular frameworks such as Flask and FastAPI
- Familiar with site reliability concepts, principles, and practices
- Experience maintaining cloud-based infrastructure
- Familiar with observability such as white and black box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
- Emerging knowledge of software, applications and technical processes within a given technical discipline (e.g., Cloud, artificial intelligence, Android, etc.)
- Emerging knowledge of continuous integration and continuous delivery tools (e.g., Jenkins, Jules, Spinnaker, BitBucket, GitLab, Terraform, etc.)
- Emerging knowledge of common networking technologies
Preferred qualifications, capabilities, and skills
- General knowledge of financial services industry
- Experience working on public cloud environment using wrappers and practices that are in use at JPMC
- Knowledge on Terraform, containers and container orchestration, especially Kubernetes preferred
at PortOne
PortOne is re-imagining payments in Korea and other international markets. We are a Series B funded startup backed by prominent VC firms SoftBank and Hanwha Capital.
PortOne provides a unified API for merchants to integrate with and manage all of the payment options available in Korea and SEA markets - Thailand, Singapore, Indonesia, etc. It is currently used by 2,000+ companies and processes multi-billion dollars in annualized volume. We are building a team to take this product to international markets, and are looking for engineers with a passion for fintech and digital payments.
Culture and Values at PortOne
- You will be joining a team that stands for Making a difference.
- You will be joining a culture that identifies more with sports teams than with a 9-to-5 workplace.
- This will be a remote role that allows you the flexibility to save time on your commute.
- You will have peers who are/have:
- Highly self-driven, with a sense of purpose
- High Energy Levels - Building stuff is your sport
- Ownership - Solve customer problems end to end - Customer is your Boss
- Hunger to learn - Highly motivated to keep developing new tech skill sets
Who are you?
* You are an athlete and DevOps/DevSecOps is your sport.
* Your passion drives you to learn and build stuff and not because your manager tells you to.
* Your work ethic is that of an athlete preparing for your next marathon. Your sport drives you and you like being in the zone.
* You are NOT a clock-watcher renting out your time, and you do NOT have an "I will do only what is asked for" attitude
* You enjoy solving problems and delighting users, both internal and external
* Take pride in working on projects to successful completion involving a wide variety of technologies and systems
* Possess strong and effective communication skills and the ability to present complex ideas in a clear and concise way
* Responsible, self-directed, forward thinker, and operates with focus, discipline and minimal supervision
* A team player with a strong work ethic
Experience
* 2+ years of experience working as a DevOps/DevSecOps Engineer
* BE in Computer Science or equivalent combination of technical education and work experience
* Must have actively managed infrastructure components & DevOps for high-quality, high-scale products
* Proficient knowledge of and experience with infra concepts - Networking/Load Balancing/High Availability
* Experience on designing and configuring infra in cloud service providers - AWS / GCP / AZURE
* Knowledge on Secure Infrastructure practices and designs
* Experience with DevOps, DevSecOps, Release Engineering, and Automation
* Experience with Agile development incorporating TDD / CI / CD practices
Hands on Skills
* Proficient in at least one high-level programming language: Go / Java / C
* Proficient in scripting (e.g., bash) to build and glue together DevOps/data-pipeline workflows
* Proficient in Cloud Services - AWS / GCP / AZURE
* Hands-on experience with CI/CD & relevant tools - Jenkins / Travis / GitOps / SonarQube / JUnit / mock frameworks
* Hands-on experience with the Kubernetes ecosystem & container-based deployments - Kubernetes / Docker / Helm Charts / Vault / Packer / Istio / Flyway
* Hands on experience on Infra as code frameworks - Terraform / Crossplane / Ansible
* Version Control & Code Quality: Git / Github / Bitbucket / SonarQube
* Experience on Monitoring Tools: Elasticsearch / Logstash / Kibana / Prometheus / Grafana / Datadog / Nagios
* Experience with RDBMS Databases & Caching services: Postgres / MySql / Redis / CDN
* Experience with data pipeline/workflow tools: Airflow / Kafka / Flink / Pub-Sub
* DevSecOps - Cloud Security Assessment, Best Practices & Automation
* DevSecOps - Vulnerability Assessments/Penetration Testing for Web, Network, and Mobile applications
* Preferable to have DevOps/Infra experience with products in the Payments/Fintech domain - payment gateways/bank integrations, etc.
What will you do?
DevOps
* Provisioning the infrastructure using Crossplane/Terraform/CloudFormation scripts.
* Creating and Managing the AWS EC2, RDS, EKS, S3, VPC, KMS and IAM services, EKS clusters & RDS Databases.
* Monitor the infra to prevent outages/downtimes and honor our infra SLAs
* Deploy and manage new infra components.
* Update and Migrate the clusters and services.
* Reducing cloud costs by scheduling or downsizing less-utilized instances.
* Collaborate with stakeholders across the organization such as experts in - product, design, engineering
* Uphold best practices in DevOps/DevSecOps and infra management, with attention to security best practices
DevSecOps
* Cloud Security Assessment & Automation
* Modify existing infra to adhere to security best practices
* Perform Threat Modelling of Web/Mobile applications
* Integrate security testing tools (SAST, DAST) into CI/CD pipelines
* Incident management and remediation - Monitoring security incidents, recovery from and remediation of the issues
* Perform frequent Vulnerability Assessments/Penetration Testing for Web, Network, and Mobile applications
* Ensure the environment is compliant to CIS, NIST, PCI etc.
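A typical piece of the security automation described above is flagging TLS certificates that are close to expiry. A minimal sketch using Python's standard `ssl` helper; the certificate date shown is a hypothetical `notAfter` value:

```python
# Cert-expiry check of the kind a DevSecOps engineer automates:
# compute days remaining from a certificate's notAfter timestamp.
# The date below is a hypothetical example, not a real certificate.
import ssl
import time

def days_until_expiry(not_after: str, now: float) -> float:
    """Days remaining before the certificate's notAfter timestamp."""
    expires = ssl.cert_time_to_seconds(not_after)  # parses "%b %d %H:%M:%S %Y GMT"
    return (expires - now) / 86400.0

if __name__ == "__main__":
    not_after = "Jan  1 00:00:00 2031 GMT"
    print(round(days_until_expiry(not_after, time.time()), 1))
```

A monitoring job would alert when the result drops below a renewal threshold (e.g. 30 days).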
Here are examples of apps/features you will be supporting as a DevOps/DevSecOps Engineer:
* Intuitive, easy-to-use APIs for payment processing.
* Integrations with local payment gateways in international markets.
* Dashboard to manage gateways and transactions.
* Analytics platform to provide insights
- Dynatrace Expertise: Lead the implementation, configuration, and optimization of Dynatrace monitoring solutions across diverse environments, ensuring maximum efficiency and effectiveness.
- Cloud Integration: Utilize expertise in AWS and Azure to seamlessly integrate Dynatrace monitoring into cloud-based architectures, leveraging PaaS services and IAM roles for efficient monitoring and management.
- Application and Infrastructure Architecture: Design and architect both application and infrastructure landscapes, considering factors like Oracle, SQL Server, Shareplex, Commvault, Windows, Linux, Solaris, SNMP polling, and SNMP traps.
- Cross-Platform Integration: Integrate Dynatrace with various products such as Splunk, APIM, and VMWare to provide comprehensive monitoring and analysis capabilities.
- Inter-Account Integration: Develop and implement integration strategies for seamless communication and monitoring across multiple AWS accounts, leveraging Terraform and IAM roles.
- Experience working with On-premise Application and Infrastructure
- Experience with AWS and Azure; cloud certified.
- Dynatrace Experience & Certification
DevOps Lead Engineer
We are seeking a skilled DevOps Lead Engineer with 8-10 years of experience who owns the entire DevOps lifecycle and is accountable for its implementation. A DevOps Lead Engineer is responsible for automating the manual tasks of building and deploying code and data to implement continuous integration and continuous deployment frameworks. They are also responsible for maintaining high availability of production and non-production environments.
Essential Requirements (must have):
• Bachelor's degree, preferably in Engineering.
• Solid 5+ years of experience with AWS, DevOps, and related technologies
Skills Required:
Cloud Performance Engineering
• Performance scaling in a Micro-Services environment
• Horizontal scaling architecture
• Containerization (such as Docker) & Deployment
• Container Orchestration (such as Kubernetes) & Scaling
DevOps Automation
• End-to-end release automation.
• Solid experience in DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, Ansible, CFN, etc.
• Solid experience in Infra Automation (Infrastructure as Code), Deployment, and Implementation.
• Candidates must possess experience in using Linux and Jenkins, and ample experience in configuring and automating monitoring tools.
• Strong scripting knowledge
• Strong analytical and problem-solving skills.
• Cloud and On-prem deployments
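One small, self-contained piece of the end-to-end release automation listed above can be sketched in Python: deriving the next semantic version from Conventional-Commits-style messages, so tagging and publishing need no manual version bump. The starting version and commit messages are invented for the example.

```python
def next_version(current, commit_messages):
    """Bump a MAJOR.MINOR.PATCH version based on commit message conventions:
    breaking changes bump major, features bump minor, anything else bumps patch."""
    major, minor, patch = map(int, current.split("."))
    if any("BREAKING CHANGE" in m for m in commit_messages):
        return f"{major + 1}.0.0"
    if any(m.startswith("feat") for m in commit_messages):
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("1.4.2", ["fix: handle nil pointer", "feat: add retry"]))  # 1.5.0
```

Tools like semantic-release automate exactly this rule; the sketch shows only the core decision, not tagging or changelog generation.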
Infrastructure Design & Provisioning
• Infra provisioning.
• Infrastructure Sizing
• Infra Cost Optimization
• Infra security
• Infra monitoring & site reliability.
Job Responsibilities:
• Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment and provide a stable environment for quality delivery.
• The DevOps Lead Engineer is accountable for designing, building, configuring, and optimizing
automation systems that help to execute business web and data infrastructure platforms.
• The DevOps Lead Engineer is involved in creating technology infrastructure, automation tools,
and maintaining configuration management.
• The Lead DevOps Engineer oversees and leads the activities of the DevOps team. They are accountable for conducting training sessions for junior team members, mentoring, and career support. They are also responsible for the architecture and technical leadership of the complete DevOps infrastructure.
Our organization relies on its central engineering workforce to develop and maintain a product portfolio of several different startups. As part of our engineering, you'll work on several products every quarter. Our product portfolio continuously grows as we incubate more startups, which means that different products are very likely to make use of different technologies, architecture & frameworks - a fun place for smart tech lovers!
Responsibilities:
- 3 - 5 years of experience working in DevOps/DevSecOps.
- Strong hands-on experience in deployment, administration, monitoring, and hosting services of Kubernetes.
- Ensure compliance with security policies and best practices for RBAC (Kubernetes) and AWS services like EKS, VPC, IAM, EC2, Route53, and S3.
- Implement and maintain CI/CD pipelines for continuous integration and deployment.
- Monitor and troubleshoot issues related to infrastructure and application security.
- Develop and maintain automation scripts using Terraform for infrastructure as code.
- Hands-on experience in configuring and managing a service mesh like Istio.
- Experience working in Cloud, Agile, CI/CD, and DevOps environments. We live in the Cloud.
- Experience with Jenkins, Google Cloud Build, or similar.
- Good to have experience using PaaS and SaaS services such as BigQuery and Cloud Storage (GCP) or S3 (AWS).
- Good to have experience with configuring, scaling, and monitoring database systems like PostgreSQL, MySQL, MongoDB, and so on.
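The RBAC compliance work described above is often automated as a policy check over role definitions. Below is a hedged, simplified sketch of one such least-privilege check in Python: flag any Kubernetes RBAC rule that grants wildcard verbs or resources. The rule dicts mirror the shape of `rbac.authorization.k8s.io` Role rules, but this is an illustration, not a real admission controller.

```python
def overly_permissive(rules):
    """Flag RBAC rules that use wildcards for verbs or resources,
    which violate least-privilege."""
    flagged = []
    for rule in rules:
        if "*" in rule.get("verbs", []) or "*" in rule.get("resources", []):
            flagged.append(rule)
    return flagged

# Example rules (invented for the demo): one scoped, one wide open.
role_rules = [
    {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]},
    {"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]},
]
print(len(overly_permissive(role_rules)))  # 1
```

In practice a check like this would run in CI against rendered manifests, or be expressed as an OPA/Kyverno policy; the Python form just makes the rule explicit.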
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3 to 7 years of experience in DevSecOps or a related field.
- Great hands-on experience in Kubernetes, EKS, AWS services like VPC, IAM, EC2, and S3, Terraform (including backends and conditional/ternary expressions), and CI/CD tools like Jenkins.
- Strong understanding of DevOps principles and practices.
- Experience in developing and maintaining automation scripts using Terraform.
- Proficiency in any Linux scripting language.
- Familiarity with security tools and technologies such as WAF, IDS/IPS, and SIEM.
- Excellent problem-solving skills and ability to work independently and within a team.
- Good communication skills and ability to work collaboratively with cross-functional teams.
- Availability to join immediately or within 15 days, in Pune, Bangalore, or Noida.
About CAW Studios:
CAW Studios is a Product Engineering Studio. WE BUILD TRUE PRODUCT TEAMS for our clients. Each team is a small, well-balanced group of geeks and a product manager who together produce relevant and high-quality products. We believe the product development process needs to be fixed as most development companies operate as IT Services. Unlike IT Apps, product development requires Ownership, Creativity, Agility, and design that scales.
Know More About CAW Studios:
Website: https://www.cawstudios.com/
Benefits: Startup culture, powerful Laptop, free snacks, cool office, flexible work hours and work-from-home policy, medical insurance, and most importantly - the opportunity to work on challenging cutting-edge problems.
at CodeCraft Technologies Private Limited
Position: Senior Backend Developer (NodeJS)
Experience: 5+ Years
Location: Bengaluru
CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web, and cloud platforms.
We are looking for an enthusiastic and self-driven Backend Engineer to join our team.
Roles and Responsibilities:
● Develop high-quality software design and architecture
● Identify, prioritize and execute tasks in the software development life cycle.
● Develop tools and applications by producing clean, efficient code
● Automate tasks through appropriate tools and scripting
● Review and debug code
● Perform validation and verification testing
● Collaborate with cross-functional teams to fix and improve products
● Document development phases and monitor systems
● Ensure software is up-to-date with the latest technologies
Desired Profile:
● NodeJS [Typescript]
● MongoDB [NoSQL DB]
● MySQL, PostgreSQL
● AWS - S3, Lambda, API Gateway, CloudWatch, ECR, ECS, Fargate, SQS / SNS
● Terraform, Kubernetes, Docker
● Good Understanding of Serverless Architecture
● Proven experience as a Senior Software Engineer
● Extensive experience in software development, scripting and project management
● Experience using system monitoring tools (e.g. New Relic) and automated testing frameworks
● Familiarity with various operating systems (Linux, Mac OS, Windows)
● Analytical mind with problem-solving aptitude
● Ability to work independently
Good to Have:
● Actively contribute to relevant open-source projects, demonstrating a commitment to community collaboration and continuous learning.
● Share knowledge and insights gained from open-source contributions with the development team
● AWS Solutions Architect Professional Certification
● AWS DevOps Professional Certification
● Multi-Cloud/ hybrid cloud experience
● Experience in building CI/CD pipelines using AWS services
Requirements
Core skills:
● Strong background in Linux/Unix administration and troubleshooting
● Experience with AWS (ideally including some of the following: VPC, Lambda, EC2, ElastiCache, Route53, SNS, CloudWatch, CloudFront, Redshift, OpenSearch, ELK, etc.)
● Experience with infra automation and orchestration tools including Terraform, Packer, Helm, Ansible.
● Hands-on experience with container technologies like Docker and Kubernetes/EKS, and with GitLab and Jenkins as pipelines.
● Experience in one or more of Groovy, Perl, Python, Go, or scripting experience in Shell.
● Good understanding of Continuous Integration (CI) and Continuous Deployment (CD) pipelines using tools like Jenkins, FlexCD, ArgoCD, Spinnaker, etc.
● Working knowledge of key-value stores and database technologies (SQL and NoSQL), MongoDB, MySQL
● Experience with application monitoring tools like Prometheus and Grafana, and APM tools like New Relic, Datadog, Pinpoint
● Good exposure to middleware components like ELK, Redis, and Kafka, and to IoT-based systems including New Relic, Akamai, Apache/Nginx, Grafana, Prometheus, etc.
Good to have:
● Prior experience in Logistics, Payment and IOT based applications
● Experience in unmanaged MongoDB clusters, automation & operations, and analytics
● Write procedures for backup and disaster recovery
Core Experience
● 3-5 years of hands-on DevOps experience
● 2+ years of hands-on Kubernetes experience
● 3+ years of Cloud Platform experience with a special focus on Lambda, R53, SNS, CloudFront, CloudWatch, Elastic Beanstalk, RDS, OpenSearch, EC2, and security tools
● 2+ years of scripting experience in Python/Go and shell
● 2+ years of familiarity with CI/CD, Git, IaC, monitoring, and logging tools
About Kiru:
Kiru is a forward-thinking payments startup on a mission to revolutionise the digital payments landscape in Africa and beyond. Our innovative solutions will reshape how people transact, making payments safer, faster, and more accessible. Join us on our journey to redefine the future of payments.
Position Overview:
We are searching for a highly skilled and motivated DevOps Engineer to join our dynamic team in Pune, India. As a DevOps Engineer at Kiru, you will play a critical role in ensuring our payment infrastructure's reliability, scalability, and security.
Key Responsibilities:
- Utilize your expertise in technology infrastructure configuration to manage and automate infrastructure effectively.
- Collaborate with cross-functional teams, including Software Developers and technology management, to design and implement robust and efficient DevOps solutions.
- Configure and maintain a secure backend environment focusing on network isolation and VPN access.
- Implement and manage monitoring solutions like Zipkin, Jaeger, New Relic, or DataDog, and visualisation and alerting solutions like Prometheus and Grafana.
- Work closely with developers to instrument code for visualisation and alerts, ensuring system performance and stability.
- Contribute to the continuous improvement of development and deployment pipelines.
- Collaborate on the selection and implementation of appropriate DevOps tools and technologies.
- Troubleshoot and resolve infrastructure and deployment issues promptly to minimize downtime.
- Stay up-to-date with emerging DevOps trends and best practices.
- Create and maintain comprehensive documentation related to DevOps processes and configurations.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience as a DevOps Engineer or in a similar role.
- Experience configuring infrastructure on Microsoft Azure
- Experience with Kubernetes as a container orchestration technology
- Experience with Terraform and Azure ARM or Bicep templates for infrastructure provisioning and management.
- Experience configuring and maintaining secure backend environments, including network isolation and VPN access.
- Proficiency in setting up and managing monitoring and visualization tools such as Zipkin, Jaeger, New Relic, DataDog, Prometheus, and Grafana.
- Ability to collaborate effectively with developers to instrument code for visualization and alerts.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and teamwork skills.
- A proactive and self-motivated approach to work.
Desired Skills:
- Experience with Azure Kubernetes Services and managing identities across Azure services.
- Previous experience in a financial or payment systems environment.
About Kiru:
At Kiru, we believe that success is achieved through collaboration. We recognise that every team member has a vital role to play, and it's the partnerships we build within our organisation that drive our customers' success and our growth as a business.
We are more than just a team; we are a close-knit partnership. By bringing together diverse talents and fostering powerful collaborations, we innovate, share knowledge, and continually learn from one another. We take pride in our daily achievements but never stop challenging ourselves and supporting each other. Together, we reach new heights and envision a brighter future.
Regardless of your career journey, we provide the guidance and resources you need to thrive. You will have everything required to excel through training programs, mentorship, and ongoing support. At Kiru, your success is our success, and that success matters because we are the essential partners for the world's most critical businesses. These companies manufacture, transport, and supply the world's essential goods.
Equal Opportunities and Accommodations Statement:
Kiru is committed to fostering a workplace and global community where inclusion is celebrated and where you can bring your authentic self, because that's who we're interested in. If you are interested in this role but don't meet every qualification in the job description, don't hesitate to apply. We are an equal opportunity employer.
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt DevOps culture in their organization by focusing on a long-term DevOps roadmap. We focus on identifying technical and cultural issues in the journey of successfully implementing DevOps practices in the organization, and work with the respective teams to fix issues and increase overall productivity. We also run training sessions for developers to convey the importance of DevOps.

We provide these services: DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Review, MLOps, Governance, and Risk & Compliance.

We assess technology architecture, security, governance, compliance, and the DevOps maturity model for any technology company, and help them optimize their cloud cost, streamline their technology architecture, and set up processes to improve the availability and reliability of their website and applications. We set up tools for monitoring, logging, and observability, and focus on bringing the DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Our Mission
Our mission is to help customers achieve their business objectives by providing innovative, best-in-class consulting, IT solutions and services and to make it a joy for all stakeholders to work with us. We function as a full stakeholder in business, offering a consulting-led approach with an integrated portfolio of technology-led solutions that encompass the entire Enterprise value chain.
Our Customer-centric Engagement Model defines how we engage with you, offering specialized services and solutions that meet the distinct needs of your business.
Our Culture
Culture forms the core of our foundation and our effort towards creating an engaging workplace has resulted in Infra360 Solution Pvt Ltd.
Our Tech-Stack:
- Azure DevOps, Azure Kubernetes Service, Docker, Active Directory (Microsoft Entra)
- Azure IAM and managed identity, Virtual network, VM Scale Set, App Service, Cosmos
- Azure MySQL, Scripting (PowerShell, Python, Bash),
- Azure Security, Security Documentation, Security Compliance,
- AKS, Blob Storage, Azure functions, Virtual Machines, Azure SQL
- AWS - IAM, EC2, EKS, Lambda, ECS, Route53, Cloud formation, Cloud front, S3
- GCP - GKE, Compute Engine, App Engine, SCC
- Kubernetes, Linux, Docker & Microservices Architecture
- Terraform & Terragrunt
- Jenkins & ArgoCD
- Ansible, Vault, Vagrant, SaltStack
- CloudFront, Apache, Nginx, Varnish, Akamai
- MySQL, Aurora, Postgres, AWS Redshift, MongoDB
- ElasticSearch, Redis, Aerospike, Memcache, Solr
- ELK, Fluentd, Elastic APM & Prometheus Grafana Stack
- Java (Spring/Hibernate/JPA/REST), Nodejs, Ruby, Rails, Erlang, Python
What does this role hold for you…?
- Infrastructure as Code (IaC)
- CI/CD and configuration management
- Managing Azure Active Directory (Entra)
- Keeping the cost of the infrastructure to the minimum
- Doing RCA of production issues and providing resolution
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning of different environments infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Setting up the right set of security measures
Requirements
Apply if you have…
- A graduation/post-graduation degree in Computer Science or related fields
- 2-4 years of strong DevOps experience in Azure with the Linux environment.
- Strong interest in working in our tech stack
- Excellent communication skills
- Worked with minimal supervision and love to work as a self-starter
- Hands-on experience with at least one of the scripting languages - Bash, Python, Go etc
- Experience with version control systems like Git
- Understanding of Azure cloud computing services and cloud computing delivery models (IaaS, PaaS, and SaaS)
- Strong scripting or programming skills for automating tasks (PowerShell/Bash)
- Knowledge and experience with CI/CD tools: Azure DevOps, Jenkins, Gitlab etc.
- Knowledge and experience in at least one IaC tool (ARM Templates/Terraform)
- Strong experience with managing the Production Systems day in and day out
- Experience in finding issues in different layers of architecture in a production environment and fixing them
- Experience in automation tools like Ansible/SaltStack and Jenkins
- Experience in Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, e.g., Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience in monitoring tools like Prometheus/Grafana/Elastic APM.
- Experience in logging tools like ELK/Loki.
- Experience in using Microsoft Azure Cloud services
If you are passionate about infrastructure, and cloud technologies, and want to contribute to innovative projects, we encourage you to apply. Infra360 offers a dynamic work environment and opportunities for professional growth.
Interview Process
Application Screening=>Test/Assessment=>2 Rounds of Tech Interview=>CEO Round=>Final Discussion
Aprajita Consultancy
Role: Oracle DBA Developer
Location: Hyderabad
Required Experience: 8 + Years
Skills: DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL Server, Cassandra, Oracle SQL/PL/SQL, MySQL/Oracle/MSSQL/MongoDB/Cassandra, security measure configuration
Roles and Responsibilities:
1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra
2. DBA experience in a SRE environment will be an advantage.
3. Experience in automation/building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning)
4. Analyze solutions and implement best practices for cloud databases and their components.
5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins etc.) that provide safe self-service capabilities to the team.
6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.
7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)
9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.
10. Recommend query and schema changes to optimize the performance of database queries.
11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as on-premises.
12. Have experience with cloud databases such as SQL Server, Oracle, Cassandra
13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)
14. Have excellent written and verbal English communication skills.
15. Planning, managing, and scaling of data stores to ensure a business’ complex data requirements are met and it can easily access its data in a fast, reliable, and safe manner.
16. Ensures the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructures.
17. Data Security and protecting the data through rigorous testing of backup and recovery processes and frequently auditing well-regulated security procedures.
18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.
19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which to automate.
20. Bachelor's Degree in a technical discipline required.
21. DBA certifications required: Oracle, SQL Server, Cassandra (2 or more)
22. Cloud and DevOps certifications will be an advantage.
Must have Skills:
- Oracle DBA with development
- SQL
- DevOps tools
- Cassandra
Role : Senior Engineer Infrastructure
Key Responsibilities:
● Infrastructure Development and Management: Design, implement, and manage robust and scalable infrastructure solutions, ensuring optimal performance, security, and availability. Lead transition and migration projects, moving legacy systems to cloud-based solutions.
● Develop and maintain applications and services using Golang.
● Automation and Optimization: Implement automation tools and frameworks to optimize operational processes. Monitor system performance, optimizing and modifying systems as necessary.
● Security and Compliance: Ensure infrastructure security by implementing industry best practices and compliance requirements. Respond to and mitigate security incidents and vulnerabilities.
Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
● Good understanding of prominent backend languages like Golang, Python, Node.js, or others.
● In-depth knowledge of network architecture, system security, and infrastructure scalability.
● Proficiency with development tools, server management, and database systems.
● Strong experience with cloud services (AWS), deployment, scaling, and management.
● Knowledge of Azure is a plus
● Familiarity with containers and orchestration services, such as Docker, Kubernetes, etc.
● Strong problem-solving skills and analytical thinking.
● Excellent verbal and written communication skills.
● Ability to thrive in a collaborative team environment.
● Genuine passion for backend development and keen interest in scalable systems.
Job Description
Role Overview:
We're looking for a passionate DevOps engineer with a minimum of 10 years' experience who will work closely with the development teams in an Agile setup to continuously improve, support, secure, and operate our production and test environments. We believe in automating our infrastructure as much as possible and pursuing challenging problems in a sustainable and repeatable way.
Our Toolchain
- Ansible, Docker, Kubernetes, Terraform, Gitlab, Jenkins, Fastlane, New Relic, Datadog, SonarQube, IaC
- Apache, Nginx, Linux, Ubuntu, Microservices, Python, Shell, Bash, Helm
- Selenium, Jmeter, Slack, Jira, SAST, OSSEC, OWASP
- Node.js, PHP, Golang, MySQL, MongoDB, Firebase, Redis, Elasticsearch
- VPC, API Gateway, Cognito, DocumentDB, ECS, Lambda, Route53, ACM, S3, EC2, IAM
You'll need:
- Production experience with distributed/scalable systems consisting of multiple microservices and/or high-traffic web applications
- Experience with configuration management systems such as Ansible, Chef, Puppet
- Extensive knowledge of the Linux operating system
- Troubleshooting skills that range from diagnosis to solution for Dev team issues
- Knowledge of how the web works and HTTP fundamentals
- Knowledge of IP networking, DNS, load balancing, and firewalling
Bonus points, if you have:
- Experience in agile development and delivery process.
- Good knowledge of at least one programming language; TecStub uses, e.g., Node.js and PHP
- Experience in containerizing applications and deployment to production (Docker, Kubernetes)
- Experience in building modern Terraform infrastructures in cloud environments (AWS, GCP, etc...)
- Experience in analysis of application and database performance using monitoring tools (New Relic, Datadog, ClusterControl, etc.)
- Experience with SQL databases like MySQL, NoSQL, Realtime database stores like Redis, or anything in between.
- Experience being part of the engineering team that built the platform.
- Knowledge of good security practices, including network security, system hardening, secure software, and compliance.
- Familiarity with automated build pipelines / continuous integration using GitLab, Jenkins, and Kubernetes/Docker. With this setup, we're deploying to production 2 times per day!
Interview Process:
The entire interview process would take approximately 10 Days.
- HR Screening Call (15 minutes)
- Technical Interview Round Level 1 (30 Minutes)
- Technical Interview Round Level 2 (60 minutes)
- Final Interview Round (60 minutes)
- Offer
About Tecstub:
Tecstub is a renowned global provider of comprehensive digital commerce solutions for some of the world's largest enterprises. With offices in North America and Asia-Pacific, our team offers end-to-end solutions such as strategic Solution Consulting, eCommerce website and application development, and support & maintenance services that are tailored to meet our clients' unique business goals. We are dedicated to delivering excellence by working as an extended partner, providing next-generation solutions that are sustainable, scalable, and future-proof. Our passionate and driven team of professionals has over a decade of experience in the industry and is committed to helping our clients stay ahead of the competition.
We value our employees and strive to create a positive work environment that promotes work-life balance and personal growth. As part of our commitment to our team, we offer a range of benefits to ensure our employees are supported and motivated.
- A 5-day work week that promotes work-life balance and allows our employees to take care of personal responsibilities while excelling in their professional roles.
- 30 annual paid leaves that can be utilized for various personal reasons, such as regional holidays, sick leaves, or any other personal needs. We believe that taking time off is essential for overall well-being and productivity.
- Additional special leaves for birthdays, maternity and paternity events to ensure that our employees can prioritize their personal milestones without any added stress.
- Health insurance coverage of 3 lakhs sum insured for our employees, spouse, and children, to provide peace of mind and security for their health needs.
- Vouchers and gifts for important life events such as birthdays and anniversaries, to celebrate our employees' milestones and show appreciation for their contributions to the company.
- A dedicated learning and growth budget for courses and certifications, to support our employees' career aspirations and encourage professional development.
- Company outings to celebrate our successes together and promote a sense of camaraderie among our team members. We believe that celebrating achievements is an important part of building a positive work culture.
Skills
AWS, Terraform, Kubernetes, GitHub, Apache, Bash, Docker, Ansible, Git, Microservices, Ubuntu, GitLab, CI/CD, Apache Server, Nginx, Node.js
We are looking for an experienced Sr. DevOps Consultant Engineer to join our team. The ideal candidate should have at least 5+ years of experience.
We are retained by a promising startup located in Silicon Valley, backed by a Fortune 50 firm, with veterans from firms such as Zscaler, Salesforce, and Oracle. The founding team has been part of three unicorns and two successful IPOs in the past, and is well funded by Dell Technologies and Westwave Capital. The company has been widely recognized as an industry innovator in the Data Privacy and Security space, and is being built by proven cybersecurity executives who have successfully built and scaled high-growth security companies and built privacy programs as executives.
Responsibilities:
- Develop and maintain infrastructure as code using tools like Terraform, CloudFormation, and Ansible
- Manage and maintain Kubernetes clusters on EKS and EC2 instances
- Implement and maintain automated CI/CD pipelines for microservices
- Optimize AWS costs by identifying cost-saving opportunities and implementing cost-effective solutions
- Implement best security practices for microservices, including vulnerability assessments, SOC2 compliance, and network security
- Monitor the performance and availability of our cloud infrastructure using observability tools such as Prometheus, Grafana, and Elasticsearch
- Implement backup and disaster recovery solutions for our microservices and databases
- Stay up to date with the latest AWS services and technologies and provide recommendations for improving our cloud infrastructure
- Collaborate with cross-functional teams, including developers, and product managers, to ensure the smooth operation of our cloud infrastructure
- Experience with large scale system design and scaling services is highly desirable
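The backup and disaster-recovery responsibility above ultimately reduces to a retention policy: which snapshots to keep, which to delete. Below is a hedged Python sketch of one common rule (keep the most recent daily snapshots plus the latest snapshot of each recent month); the counts and dates are illustrative, not from the posting.

```python
from datetime import date

def snapshots_to_keep(snapshot_dates, keep_daily=7, keep_monthly=3):
    """Given snapshot dates, return the set to retain: the newest
    keep_daily snapshots, plus the latest snapshot from each of the
    keep_monthly most recent months."""
    ordered = sorted(snapshot_dates, reverse=True)
    keep = set(ordered[:keep_daily])       # most recent dailies
    months_seen = []
    for d in ordered:
        key = (d.year, d.month)
        if key not in months_seen:
            months_seen.append(key)
            keep.add(d)                     # latest snapshot of that month
        if len(months_seen) >= keep_monthly:
            break
    return keep

# Invented backup history: two weeks of dailies plus two older month-ends.
days = [date(2024, 3, n) for n in range(1, 15)] + [date(2024, 2, 28), date(2024, 1, 31)]
kept = snapshots_to_keep(days)
print(len(kept))  # 9: seven March dailies, plus 2024-02-28 and 2024-01-31
```

Whatever the rule, the DR half of the job is verifying that the retained snapshots actually restore, which is why recovery drills belong alongside retention automation.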
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in AWS DevOps and infrastructure engineering
- Expertise in Kubernetes management, Docker, EKS, EC2, Queues, Python Threads, Celery Optimization, Load balancers, AWS cost optimizations, Elasticsearch, Container management, and observability best practices
- Experience with SOC2 compliance and vulnerability assessment best practices for microservices
- Familiarity with AWS services such as S3, RDS, Lambda, and CloudFront
- Strong scripting skills in languages like Python, Bash, and Go
- Excellent communication skills and the ability to work in a collaborative team environment
- Experience with agile development methodologies and DevOps practices
- AWS certification (e.g. AWS Certified DevOps Engineer, AWS Certified Solutions Architect) is a plus.
Notice period : Can join within a month
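The cost-optimization duty above (identifying cost-saving opportunities) often boils down to small audit scripts. A minimal, illustrative Python sketch follows; the instance records are plain dicts standing in for what a boto3 `describe_instances` call would return, and the 30-day threshold and field names are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

def find_stale_stopped_instances(instances, max_stopped_days=30, now=None):
    """Return IDs of instances stopped longer than `max_stopped_days`.

    Stopped instances still bill for their attached EBS volumes, so
    long-stopped instances are a common cost-saving target.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_stopped_days)
    stale = []
    for inst in instances:
        if inst["State"] == "stopped" and inst["StoppedAt"] <= cutoff:
            stale.append(inst["InstanceId"])
    return stale

if __name__ == "__main__":
    now = datetime(2024, 6, 1, tzinfo=timezone.utc)
    fleet = [
        {"InstanceId": "i-aaa", "State": "running", "StoppedAt": None},
        {"InstanceId": "i-bbb", "State": "stopped",
         "StoppedAt": datetime(2024, 1, 1, tzinfo=timezone.utc)},
        {"InstanceId": "i-ccc", "State": "stopped",
         "StoppedAt": datetime(2024, 5, 25, tzinfo=timezone.utc)},
    ]
    print(find_stale_stopped_instances(fleet, now=now))  # → ['i-bbb']
```

A real audit would feed this from boto3 and cross-check CloudWatch utilization before acting.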
We're Hiring: DevOps Tech Lead with 7-9 Years of Experience! 🚀
Are you a seasoned DevOps professional with a passion for cloud technologies and automation? We have an exciting opportunity for a DevOps Tech Lead to join our dynamic team at our Gurgaon office.
🏢 ZoomOps Technology Solutions Private Limited
📍 Location: Gurgaon
💼 Full-time position
🔧 Key Skills & Requirements:
✔ 7-9 years of hands-on experience in DevOps roles
✔ Proficiency in Cloud Platforms like AWS, GCP, and Azure
✔ Strong background in Solution Architecture
✔ Expertise in writing Automation Scripts using Python and Bash
✔ Ability to manage IaC and configuration-management tools like Terraform, Ansible, Pulumi, etc.
Responsibilities:
🔹 Lead and mentor the DevOps team, driving innovation and best practices
🔹 Design and implement robust CI/CD pipelines for seamless software delivery
🔹 Architect and optimize cloud infrastructure for scalability and efficiency
🔹 Automate manual processes to enhance system reliability and performance
🔹 Collaborate with cross-functional teams to drive continuous improvement
Join us to work on exciting projects and make a significant impact in the tech space!
Apply now and take the next step in your DevOps career!
Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.
At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.
About this roll* (Responsibilities)
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Participate in system design consulting, platform management, and capacity planning
- Create sustainable systems and services through automation and uplift
- Balance feature development speed and reliability with well-defined service level objectives
Troubleshooting and Supporting Escalations:
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
- Implement strategies to increase system reliability and performance through on-call rotation and process optimization
- Run blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again
Do you have the right ingredients? (Requirements)
- Extensive industry experience with at least 7+ years in SRE and/or DevOps roles
- Polyglot technologist/generalist with a thirst for learning
- Deep understanding of cloud and microservice architecture and the JVM
- Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
- Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
- Experience with cloud computing technologies ( AWS cloud provider preferred)
Bread puns are encouraged but not required
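The SLO-balancing responsibility above is commonly made concrete with an error budget. A hedged sketch, with made-up numbers (the 99.9% target and request counts are illustrative, not Toast's actual SLOs):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    # A 99.9% availability SLO over N requests allows 0.1% of N to fail.
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0  # a 100% SLO leaves no budget at all
    return 1.0 - failed_requests / allowed_failures

if __name__ == "__main__":
    # 99.9% SLO over 1,000,000 requests allows ~1,000 failures;
    # 250 observed failures leave about 75% of the budget.
    print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # → 0.75
```

When the remaining budget trends toward zero, the usual SRE response is to slow feature releases and spend engineering time on reliability instead.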
We are hiring for
DevSecOps Engineer
3-5 years | On-Site , Bangalore.
Responsibilities
● Experience with fully automating CI/CD pipelines end-to-end
● Knowledge of SAST, DAST, SCA, OSA, KICS Scanning
● Serve as a Subject Matter Expert (SME) in DevSecOps and CI/CD best practices and contribute as a member of the technical solutions team
● Perform DevSecOps security activities such as Vulnerability Scanning, Certificate Management, Password Policy Management, Data Analysis of security monitoring outputs, coordination of Remediation Patching, and other daily Security and Compliance efforts
● Experience with OWASP Testing Guide v3 / 4 and OWASP TOP 10
● Experience with tools like SonarQube and JIRA to track tickets and requests/incidents
Requirement
● Experience in Web and/or Mobile applications and common vulnerabilities
● Build and configure delivery environments supporting CD/CI tools using an Agile delivery methodology
● Working closely with our development team to create an automated continuous integration (CI) and continuous delivery (CD) system
● At least 3-5 years of experience building and maintaining AWS / GCP infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy)
● Strong understanding of how to secure AWS / GCP environments and meet compliance requirements
● Experience in Ansible, Terraform, Kubernetes, Docker, Helm, CloudFormation templates, Jenkins, and web server deployment and configuration
● Solid foundation of networking and Linux administration
● Ability to learn/use a wide variety of open source technologies and tools
● Background in FinTechs / Banks preferred.
Why Work at Open?
● You will be part of the early tribe that is changing the way business banking rolls.
● Every atom of your work will impact the way millions of businesses are run.
● You will work with some of the brightest minds who will celebrate your quirks.
● You will find growth & fun to be two-way streets - how you thrive and the way you jive, in turn drives Open
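Securing AWS environments, as the requirements above describe, typically includes automated audits of security-group rules. A minimal sketch: the rule dicts mimic the shape of boto3's `describe_security_groups` output, and the "sensitive ports" set is an assumption for illustration.

```python
# Ports that should rarely, if ever, be open to the whole internet
# (SSH, RDP, Postgres, MySQL) — an illustrative choice, not a standard.
SENSITIVE_PORTS = {22, 3389, 5432, 3306}

def risky_ingress_rules(rules):
    """Return ingress rules that expose a sensitive port to 0.0.0.0/0."""
    risky = []
    for rule in rules:
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                            for r in rule.get("IpRanges", []))
        port_sensitive = rule.get("FromPort") in SENSITIVE_PORTS
        if open_to_world and port_sensitive:
            risky.append(rule)
    return risky

if __name__ == "__main__":
    rules = [
        {"FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # fine
        {"FromPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},    # risky
        {"FromPort": 22, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},   # internal
    ]
    print([r["FromPort"] for r in risky_ingress_rules(rules)])  # → [22]
```

In a real DevSecOps pipeline the same check would run on every Terraform plan or on a schedule against live accounts.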
Job Title: DevOps SDE III
Job Summary
Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.
Responsibilities and Duties
In this role, you will
- Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
- Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
- Build out and improve our observability infrastructure.
- Set up automated auditing processes and improve our applications' security posture.
- Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
- Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
- Continuously improve our build pipelines, automated deployments, and automated testing.
- Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
Qualifications and Skills
Hard requirements for this role:
- 5+ years of experience as a DevOps / Infrastructure engineer on AWS.
- Experience with git, CI/CD, and Docker. (We use GitHub, GitHub Actions, Jenkins, ECS, and Kubernetes)
- Experience in working with infrastructure as code (Terraform/CloudFormation).
- Linux and networking administration experience.
- Strong Linux Shell scripting experience.
- Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred)
- Experience with configuration management tools like Ansible and Packer.
- Experience with container orchestration tools. (Kubernetes/ECS).
- Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres)
- AWS SysOps administrator + Developer certification or equivalent knowledge
Good to have:
- Experience working with ELK stack.
- Experience supporting JVM applications.
- Experience working with APM tools. (We use Datadog)
- Experience working in an XaaC environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel)
- Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty)
- Experience with JIRA/Jira help desk.
Python Developer
6-8 Years
Mumbai
Notice period: immediate joiners only, or candidates serving notice whose last working day is in the first week of July.
- Python knowledge: object-oriented programming (inheritance, abstract classes, dataclasses, dependency injection), design patterns (command-query, repository, adapter, hexagonal architecture), Swagger/OpenAPI, Flask, Connexion
- Experience with AWS services: Lambda, ECS, SQS, S3, DynamoDB, Aurora
- Experience with the following libraries: boto3, behave, pytest, moto, localstack, docker
- Basic knowledge of Terraform and GitLab CI
- Experience with SQL databases
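Several of the patterns listed above (dataclasses, abstract classes, repository, dependency injection) compose naturally. A hypothetical sketch with invented names (`Order`, `OrderService`), not taken from any real codebase; a production variant might back the repository with DynamoDB:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    amount: int

class OrderRepository(ABC):
    """Port: the domain depends on this interface, not on any storage tech."""
    @abstractmethod
    def add(self, order: Order) -> None: ...
    @abstractmethod
    def get(self, order_id: str) -> Order: ...

class InMemoryOrderRepository(OrderRepository):
    """Adapter: a trivial implementation, handy for tests (same idea as moto
    faking boto3)."""
    def __init__(self):
        self._orders = {}
    def add(self, order):
        self._orders[order.order_id] = order
    def get(self, order_id):
        return self._orders[order_id]

class OrderService:
    def __init__(self, repo: OrderRepository):
        # Dependency injection: the repository is passed in, so callers
        # choose the adapter (in-memory, SQL, DynamoDB, ...).
        self._repo = repo
    def place(self, order_id: str, amount: int) -> Order:
        order = Order(order_id, amount)
        self._repo.add(order)
        return order

if __name__ == "__main__":
    repo = InMemoryOrderRepository()
    service = OrderService(repo)
    service.place("ord-1", 250)
    print(repo.get("ord-1").amount)  # → 250
```

The hexagonal-architecture payoff is that `OrderService` never imports a database driver, so swapping storage means writing one new adapter.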
Responsibilities:
- Design, implement, and maintain cloud infrastructure solutions on Microsoft Azure, with a focus on scalability, security, and cost optimization.
- Collaborate with development teams to streamline the deployment process, ensuring smooth and efficient delivery of software applications.
- Develop and maintain CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI to automate build, test, and deployment processes.
- Utilize infrastructure-as-code (IaC) principles to create and manage infrastructure deployments using Terraform, ARM templates, or similar tools.
- Manage and monitor containerized applications using Azure Kubernetes Service (AKS) or other container orchestration platforms.
- Implement and maintain monitoring, logging, and alerting solutions for cloud-based infrastructure and applications.
- Troubleshoot and resolve infrastructure and deployment issues, working closely with development and operations teams.
- Ensure high availability, performance, and security of cloud infrastructure and applications.
- Stay up-to-date with the latest industry trends and best practices in cloud infrastructure, DevOps, and automation.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- Minimum of four years of proven experience working as a DevOps Engineer or similar role, with a focus on cloud infrastructure and deployment automation.
- Strong expertise in Microsoft Azure services, including but not limited to Azure Virtual Machines, Azure App Service, Azure Storage, Azure Networking, Azure Security, and Azure Monitor.
- Proficiency in infrastructure-as-code (IaC) tools such as Terraform or ARM templates.
- Hands-on experience with containerization and orchestration platforms, preferably Azure Kubernetes Service (AKS) or Docker Swarm.
- Solid understanding of CI/CD principles and experience with relevant tools such as Azure DevOps, Jenkins, or GitLab CI.
- Experience with scripting languages like PowerShell, Bash, or Python for automation tasks.
- Strong problem-solving and troubleshooting skills with a proactive and analytical mindset.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Azure certifications (e.g., Azure Administrator, Azure DevOps Engineer, Azure Solutions Architect) are a plus.
at Upswing Financial Technologies Private Limited
At Upswing, we are committed to building a robust, scalable & secure API platform to power the world of Open Finance.
We are a passionate and self-driven team of thinkers who aspire to build the rails to connect the legacy financial sector with financial innovators through a simple and powerful banking-as-a-service (BaaS) platform.
We are looking for motivated engineers who will be working in a highly creative and cutting-edge technology environment to build a world-class financial services suite.
About the role
As part of the DevSecOps team at Upswing, you will get to work on building state-of-the-art infrastructure for the future. You will also be –
- Managing security aspects of the Cloud Infrastructure
- Designing and Implementing Security measures, Incident Response guidelines
- Conducting Security Awareness Training
- Developing SIEM tooling and pipelines end to end for vulnerability/security/incident reporting
- Developing automation and performing routine VAPT for Network and Applications
- Integrating with 3rd party vendors for the services required to improve security posture
- Mentoring people across the teams to enable best practices
What will you do if you join us?
- Engage in a lot of cross-team collaboration to independently drive forward DevSecOps practices across the org
- Take Ownership of existing, ongoing, and future DevSecOps initiatives
- Plan and Engage in Architecture discussions to bring in different angles (especially security angles) to the table
- Build Automation stack and tools for security pipeline
- Integrate different security measures and pipelines with the SIEM tool
- Conduct routine VAPT using manual and automated workflows, and generate and maintain reports for the same
- Introduce and Implement best practices across teams for a great security posture in the org
You should have
- Curiosity for on-the-job learning and experimenting with new technologies and ideas
- A strong background in Linux environment
- Proven experience in Architecting networks with security first implementation
- Experience with VAPT tooling for Networks and Applications is required
- Strong experience in Cloud technologies, multi-cloud environments, and best practices in Cloud
- Experience with at least one scripting language (Ruby/Python/Groovy)
- Experience in Terraform is highly desirable but not mandatory
- Some experience with Kubernetes, and Docker is required
- Understanding Java web applications and monitoring them for security vulnerabilities would be a plus
- Any other DevSecOps-related experience will be considered
at Upswing Financial Technologies Private Limited
As part of the Cloud Platform / Devops team at Upswing, you will get to work on building state-of-the-art infrastructure for the future. You will also be –
- Building infrastructure on AWS driven through Terraform, and building automation tools for deployment, infrastructure management, and the observability stack
- Building and Scaling on Kubernetes
- Ensuring the Security of Upswing Cloud Infra
- Building Security Checks and automation to improve overall security posture
- Building automation stack for components like JVM-based applications, Apache Pulsar, MongoDB, PostgreSQL, Reporting Infra, etc.
- Mentoring people across the teams to enable best practices
- Mentoring and guiding team members to upskill and helping them develop world-class Fintech infrastructure
What will you do if you join us?
- Write a lot of code
- Engage in a lot of cross-team collaboration to independently drive forward infrastructure initiatives and Devops practices across the org
- Taking Ownership of existing, ongoing, and future initiatives
- Plan Architecture- for upcoming infrastructure
- Build for Scale, Resiliency & Security
- Introduce best practices wrt Devops & Cloud in the team
- Mentor new/junior team members and eventually build your own team
You should have
- Curiosity for on-the-job learning and experimenting with new technologies and ideas
- A strong background in Linux environment
- Must have Programming skills and Experience
- Strong experience in Cloud technologies, Security and Networking concepts, Multi-cloud environments, etc.
- Experience with at least one scripting language (GoLang/Python/Ruby/Groovy)
- Experience in Terraform is highly desirable but not mandatory
- Experience with Kubernetes and Docker is required
- Understanding of the Java Technologies and Stack
- Any other Devops related experience will be considered
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, Visit! - https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, or Bash
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
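Automation scripts of the kind described above often include TLS certificate-expiry checks. A stdlib-only sketch: a real script would fetch the certificate over a socket, but a fixed `notAfter` string (the format Python's `ssl` module uses in certificate dicts) keeps the logic self-contained.

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days until a certificate's `notAfter` timestamp; negative if expired."""
    # ssl.cert_time_to_seconds parses the 'Mon DD HH:MM:SS YYYY GMT'
    # strings found in ssl certificate dicts, interpreting them as UTC.
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expires - now).days

if __name__ == "__main__":
    now = datetime(2025, 1, 1, tzinfo=timezone.utc)
    print(days_until_expiry("Jan 31 12:00:00 2025 GMT", now))  # → 30
```

A cron job wrapping this with an alert threshold (say, 14 days) is a common first line of defense against expired-certificate outages.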
Job Responsibilities:
- Work on and deploy updates and fixes
- Provide Level 2 technical support
- Support implementation of fully automated CI/CD pipelines as per dev requirements
- Follow the escalation process through issue completion, including providing documentation after resolution
- Follow regular operations procedures and complete all assigned tasks during the shift
- Assist in root cause analysis of production issues and help write a report that includes details about the failure, the relevant log entries, and the likely root cause
- Set up CI/CD frameworks (Jenkins / Azure DevOps Server), containerization using Docker, etc.
- Implement continuous testing, code quality, and security using DevOps tooling
- Build a knowledge base by creating and updating documentation for support
Skills Required:
DevOps, Linux, AWS, Ansible, Jenkins, GIT, Terraform, CI, CD, Cloudformation, Typescript
Company: Accionlabs Technologies [www.accionlabs.com]
Location: Bengaluru
Work Type: Permanent
Salary: Open
This is a work-from-office role.
Key Aspects of Role
- Leverage deep knowledge of the full technology stack to help achieve business objectives and customer outcomes
- Collaborate with Product Management to validate technical feasibility and establish non-functional requirements
- Collaborate with Architecture to evolve the architecture to solve technical challenges, support future requirements, scale effectively, continually meet/exceed SLAs, and resolve tech debt
- Technical advisor to internal or external stakeholders on complex technical components
- Technical leader working with the team to help remove blockers and act as a tie breaker
- Adjust the team processes, listening to feedback and guiding the team through change and driving continuous improvement
- Guide, teach, and mentor the team, providing feedback and moderating discussions
- Represent the interests of the team in cross-functional meetings
- Maintain and proactively share knowledge of current technology and industry trends
- Work closely with peers to ensure the team is aligning with cloud native, lean/Agile/DevOps, and 12 Factor Application best practices, ensuring rapid value delivery with quality
- Collaborate with other Principal Engineers to drive engineering best practices around testing, CI/CD, GitOps, TDD, architectural alignment, and relentless automation
- Excellent understanding and familiarity with Cloud Native and 12 Factor Principles, Microservices, Lean Principles, DevOps, Test Driven Development (TDD), Extreme Programming (XP), and Observability / Monitoring
Required Skills
- Coding experience in Java
- Extensive hands-on experience working with AWS cloud products and services
- Experience with popular open-source software such as Postgres, RabbitMQ, Elasticsearch, Redis, and Couchbase
- Experience working with NodeJS, React/Redux, Docker Swarm, Kubernetes
- Experience with development frameworks such as the Spring/Spring Boot framework and Hibernate, and knowledge of advanced SQL
- Proficiency with modern object-oriented languages/frameworks, Terraform, Kubernetes, AWS, Data Streaming
- Knowledge of containers and container orchestration platforms, preferably Kubernetes
- Experience delivering services using distributed architectures: Microservices, SOA, RESTful APIs, and data integration architectures
- Advanced architecture and system design skills and principles
- Excellent organizational skills; can drive a cross-team strategic project or initiative
- Solid coaching, mentorship, and technical leadership to help others grow
- Able to drive consensus/commitment within and across teams and departments
- Advanced critical thinking and problem solving on complex issues and customer concerns
- Strategic thinker beyond immediate needs, considering the longer term
- Excellent communication skills, with the ability to communicate highly complex technical concepts
- Demonstrates a high level of empathy with internal colleagues, stakeholders, and customers
at LS Spectrum Solutions Private Limited
Job Description
▪ You are responsible for setting up, operating, and monitoring LS system solutions on premise and in the cloud
▪ You are responsible for the analysis and long-term elimination of system errors
▪ You provide support in the area of information and IT security
▪ You will work on the strategic further development and optimize the platform used
▪ You will work in a global, international team
Requirement profile
▪ You have successfully completed an apprenticeship / degree in the field of IT
▪ You can demonstrate in-depth knowledge and experience in the following areas:
▪ PostgreSQL databases
▪ Linux (e.g. Ubuntu, Oracle Linux, RHEL)
▪ Windows (e.g. Windows Server 2019/2022)
▪ Automation / IaC (e.g. Ansible, Terraform)
▪ Containerization with Kubernetes / virtualization with VMware is an advantage
▪ Service APIs (AWS, Azure)
▪ You have very good knowledge of English, knowledge of German is an advantage
▪ You are a born team player, show high commitment and are resilient
Main tasks
- Supervision of the CI/CD process for the automated builds and deployments of web services, web applications, and desktop tools in the cloud and container environment
- Responsibility of the operations part of a DevOps organization especially for development in the environment of container technology and orchestration, e.g. with Kubernetes
- Installation, operation and monitoring of web applications in cloud data centers for the purpose of development of the test as well as for the operation of an own productive cloud
- Implementation of installations of the solution especially in the container context
- Introduction, maintenance and improvement of installation solutions for development in the desktop and server environment as well as in the cloud and with on-premise Kubernetes
- Maintenance of the system installation documentation and delivery of training
- Execution of internal software tests and support of involved teams and stakeholders
- Hands on Experience with Azure DevOps.
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
- Experience in software
- Installation and administration of Linux and Windows systems including network and firewalling aspects
- Experience with build and deployment automation with tools like Jenkins, Gradle, Argo, AnangoDB or similar as well as system scripting (Bash, Power-Shell, etc.)
- Interest in operation and monitoring of applications in virtualized and containerized environments in cloud and on-premise
- Server environments, especially application, web-and database servers
- Knowledge of VMware/k3d/Rancher is an advantage
- Good spoken and written knowledge of English
Job Description:
• Drive end-to-end automation from GitHub/GitLab/BitBucket to deployment and observability, and enable SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of digital platforms and applications
• Solid understanding of DevSecOps workflows that support CI, CS, CD, CM, CT
• Deploy, configure, and manage SaaS and PaaS cloud platforms and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
• DevOps programming: writing scripts; building operations/server instance/app/DB monitoring tools; setting up and managing continuous build and dev project management environments (Jenkins X/GitHub Actions/Tekton, Git, Jira); designing secure networks, systems, and application architectures
• Collaborate with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Plan, research, and develop security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and best practices
• Installation and use of firewalls, data encryption, and other security products and procedures
• Maturity in understanding compliance, policy, and cloud governance, and the ability to identify and execute automation
• At Wesco, we discuss solutions more than problems. We celebrate innovation and creativity.
Key Responsibilities
• As a part of the DevOps team, you will be responsible for the configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root-cause solutions for failures and performance issues in the production environment.
• Deploy and manage Infrastructure for production applications
• Configure security best practices for application and infrastructure
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS & GCP. (Preferably Azure)
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, Team City, Octopus Deploy, AWS Code Deploy, and Jenkins.
• Experience of writing automation scripts with PowerShell, Bash, Python, etc.
• GitHub, JIRA, Confluence, and Continuous Integration (CI) system.
• Understanding of secure DevOps practices
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management and workflow tools such as Agile, Jira, Scrum/Kanban, etc.
• Experience with Build technologies and cloud services. (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS Code Deploy)
• Strong communication skills and ability to explain protocol and processes with team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated; demonstrating the ability to achieve in technologies with minimal supervision.
• Organized, flexible, and analytical ability to solve problems creatively
at Nagarro Software
👋🏼We're Nagarro.
We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 33 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!
Please find the job details below
Experience: 7.5-10 years
REQUIREMENTS:
Must have Skills: Cloud architecture (Capable), Terraform, Kubernetes, Azure, Cloud Migration Expert
Job Description :
- Responsible for Automating and orchestrating large scale application and database migration from on-prem to Cloud
- Expertise designing and developing enterprise grade architecture on Cloud
- Expertise in Cloud Security and create guidelines for its implementation
- Experience of managing enterprise grade cloud native Kubernetes cluster
- Expertise in Infrastructure as Code (IaC) using tools like Terraform, AWS CDK, or other cloud-native IaC services
- Experience with monitoring tools like Prometheus & Grafana, Nagios/CloudWatch/Datadog/Zabbix, and logging tools like Splunk/Logstash
- Rich experience in designing end to end CI/CD pipelines framework
- Must have Skills: AWS/Azure, Kubernetes, Terraform/AWS CDK, CICD
Responsibilities:
- Writing and reviewing great quality code
- Understanding functional requirements thoroughly and analysing the client’s needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns, and frameworks to realize it.
- Determining and implementing design methodologies and tool sets
- Enabling application development by coordinating requirements, schedules, and activities.
- Being able to lead/support UAT and production roll outs.
- Creating, understanding, and validating WBS and estimated effort for given module/task, and being able to justify it.
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to the team members and setting clear expectations.
- Helping the team troubleshoot and resolve complex bugs
- Coming up with solutions to any issue that is raised during code/design review and being able to justify the decision taken.
- Carrying out POCs to make sure that suggested design/technologies meet the requirements.
We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to ensure that our systems are secure, reliable and high-performing by constantly striving to achieve best-in-class infrastructure and security by:
- Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
- Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
- Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
- Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
- Maintaining a culture of empowerment and self-service by minimizing friction for developers to understand and use our infrastructure through a combination of innovative tools, excellent documentation and teamwork
Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward through digital transformation via assessments, migration, or modernization.
We are looking for a DevOps Engineer with expertise in infrastructure as code, configuration management, continuous integration, continuous deployment, and automated monitoring for big data workloads, large enterprise applications, customer applications, and databases.
You will have hands-on technology expertise coupled with a background in professional services and client-facing skills. You are passionate about the best practices of cloud deployment and ensuring the customer expectation is set and met appropriately. If you love to solve problems using your skills, then join the Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What you will do?
- Automate infrastructure creation with Terraform and AWS CloudFormation
- Perform application configuration management and application deployment, enabling infrastructure as code.
- Take ownership of the build and release cycle of the customer project.
- Share the responsibility for deploying releases and conducting other operations maintenance.
- Enhance operations infrastructure such as Jenkins clusters, Bitbucket, monitoring tools (Consul), and metrics tools such as Graphite and Grafana.
- Provide operational support for the rest of the Engineering team and help migrate our remaining dedicated hardware infrastructure to the cloud.
- Establish and maintain operational best practices.
- Participate in hiring culturally fit engineers in the organization, help engineers make their career paths by consulting with them.
- Design the team strategy in collaboration with founders of the organization.
What are we looking for?
- 4+ years of experience using Terraform for IaC
- 4+ years of configuration management and engineering for large-scale customers, ideally supporting an Agile development process.
- 4+ years of Linux or Windows Administration experience.
- 4+ years of version control systems (git), including branching and merging strategies.
- 2+ years of experience working with AWS infrastructure and platform services.
- 2+ years of experience with cloud automation tools (Ansible, Chef).
- Exposure to working on container services like Kubernetes on AWS, ECS, and EKS
- You are extremely proactive at identifying ways to improve things and make them more reliable.
You will be preferred if
- Expertise in multiple cloud service providers: Amazon Web Services, Microsoft Azure, Google Cloud Platform
- AWS Solutions Architect Professional or Associate Level Certificate
- AWS DevOps Professional Certificate
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment:
You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
at CodeCraft Technologies Private Limited
Roles and Responsibilities:
• Gather and analyse cloud infrastructure requirements
• Automating system tasks and infrastructure using a scripting language (Shell/Python/Ruby preferred), with configuration management tools (Ansible/Puppet/Chef), service registry and discovery tools (Consul, Vault, etc.), infrastructure orchestration tools (Terraform, CloudFormation), and automated imaging tools (Packer)
• Support existing infrastructure, analyse problem areas and come up with solutions
• An eye for monitoring – the candidate should be able to look at complex infrastructure and figure out what to monitor and how.
• Work along with the Engineering team to help out with Infrastructure / Network automation needs.
• Deploy infrastructure as code and automate as much as possible
• Manage a team of DevOps engineers
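The orchestration-tools bullet above (Terraform, CloudFormation) is ultimately about expressing infrastructure declaratively. As a minimal, hypothetical sketch of that idea, the snippet below emits a CloudFormation template declaring a single S3 bucket; the logical ID is a placeholder, not a name from this posting.

```python
import json

def make_bucket_template(logical_id: str) -> str:
    """Return a minimal CloudFormation template declaring one S3 bucket."""
    template = {
        # Fixed format version used by all current CloudFormation templates.
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {"Type": "AWS::S3::Bucket"},
        },
    }
    return json.dumps(template, indent=2)
```

The resulting JSON could be written to a file and deployed with `aws cloudformation deploy`; in practice teams usually keep such templates in version control rather than generating them ad hoc.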
Desired Profile:
• Understanding of provisioning of Bare Metal and Virtual Machines
• Working knowledge of Configuration management tools like Ansible/ Chef/ Puppet, Redfish.
• Experience in scripting languages like Ruby/ Python/ Shell Scripting
• Working knowledge of IP networking, VPNs, DNS, load balancing, firewalling & IPS concepts
• Strong Linux/Unix administration skills.
• Self-starter who can implement with minimal guidance
• Hands-on experience setting up CI/CD from scratch in Jenkins
• Experience with Managing K8s infrastructure
Responsibilities
Provisioning and de-provisioning AWS accounts for internal customers
Work alongside systems and development teams to support the transition and operation of client websites/applications in and out of AWS.
Deploying, managing, and operating AWS environments
Identifying appropriate use of AWS operational best practices
Estimating AWS costs and identifying operational cost control mechanisms
Keep technical documentation up to date
Proactively keep up to date on AWS services and developments
Create (where appropriate) automation, in order to streamline provisioning and de-provisioning processes
Lead certain data/service migration projects
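"Estimating AWS costs" from the responsibilities above often starts as back-of-the-envelope arithmetic before reaching for the pricing API. In the sketch below, the hourly rates are illustrative placeholders, not quoted AWS prices.

```python
HOURS_PER_MONTH = 730  # AWS's conventional month length in hours

# Hypothetical on-demand hourly rates in USD; real rates vary by region
# and should come from the AWS Price List API or pricing pages.
RATES = {"t3.medium": 0.0416, "m5.large": 0.096}

def monthly_cost(fleet: dict) -> float:
    """Estimate monthly on-demand cost for a {instance_type: count} fleet."""
    return sum(RATES[itype] * count * HOURS_PER_MONTH
               for itype, count in fleet.items())
```

This kind of estimate is a lower bound: storage, data transfer, and managed services usually add materially to the compute figure.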
Job Requirements
Experience provisioning, operating, and maintaining systems running on AWS
Experience with Azure/AWS.
Capabilities to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project
Experience with application/data migration to/from AWS
Experience with NGINX and the HTTP protocol.
Experience with configuration and management software such as Git
Strong analytical and problem-solving skills
Deployment experience using common AWS technologies like VPC, regionally distributed EC2 instances, Docker, and more.
Ability to work in a collaborative environment
Detail-oriented, strong work ethic and high standard of excellence
A fast learner and achiever who sets high personal goals
Must be able to work on multiple projects and consistently meet project deadlines
Key Skills Required for Lead DevOps Engineer
Containerization Technologies: Docker, Kubernetes, OpenShift
Cloud Technologies: AWS/Azure, GCP
CI/CD Pipeline Tools: Jenkins, Azure DevOps
Configuration Management Tools: Ansible, Chef
SCM Tools: Git, GitHub, Bitbucket
Monitoring Tools: New Relic, Nagios, Prometheus
Cloud Infra Automation: Terraform
Scripting Languages: Python, Shell, Groovy
· Ability to decide the architecture and tools for the project as per availability
· Sound knowledge of deployment strategies and the ability to define timelines
· Team-handling skills are a must
· Debugging skills are an advantage
· Good to have knowledge of databases like MySQL, PostgreSQL
· Good to have familiarity with Kafka and RabbitMQ
· Good to have knowledge of web servers to deploy web applications
· Good to have knowledge of code-quality checking tools like SonarQube and vulnerability scanning
· Advantage to have experience in DevSecOps
Note: Tools mentioned in bold are a must; others are an added advantage
**THIS IS A 100% WORK FROM OFFICE ROLE**
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
ROLE and RESPONSIBILITIES:
• Understanding customer requirements and project KPIs
• Implementing various development, testing, automation tools, and IT infrastructure
• Planning the team structure, activities, and involvement in project management
activities.
• Managing stakeholders and external interfaces
• Setting up tools and required infrastructure
• Defining and setting development, test, release, update, and support processes for
DevOps operation
• Have the technical skill to review, verify, and validate the software code developed in
the project.
• Troubleshooting techniques and fixing the code bugs
• Monitoring the processes during the entire lifecycle for its adherence and updating or
creating new processes for improvement and minimizing the wastage
• Encouraging and building automated processes wherever possible
• Identifying and deploying cybersecurity measures by continuously performing
vulnerability assessment and risk management
• Incident management and root cause analysis
• Coordination and communication within the team and with customers
• Selecting and deploying appropriate CI/CD tools
• Strive for continuous improvement and build continuous integration, continuous
development, and constant deployment pipeline (CI/CD Pipeline)
• Mentoring and guiding the team members
• Monitoring and measuring customer experience and KPIs
• Managing periodic reporting on the progress to the management and the customer
Essential Skills and Experience Technical Skills
• Proven 3+ years of experience in DevOps
• A bachelor’s degree or higher qualification in computer science
• The ability to code and script in multiple languages and automation frameworks such as Python, C#, Java, Perl, and Ruby, plus experience with data stores such as SQL Server, NoSQL, and MySQL
• An understanding of the best security practices and automating security testing and
updating in the CI/CD (continuous integration, continuous deployment) pipelines
• An ability to conveniently deploy monitoring and logging infrastructure using appropriate tools.
• Proficiency in container frameworks
• Mastery in the use of infrastructure automation toolsets like Terraform, Ansible, and command line interfaces for Microsoft Azure, Amazon AWS, and other cloud platforms
• Certification in Cloud Security
• An understanding of various operating systems
• A strong focus on automation and agile development
• Excellent communication and interpersonal skills
• An ability to work in a fast-paced environment and handle multiple projects
simultaneously
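The monitoring and logging skills listed above usually involve probes that retry with exponential backoff rather than hammering a failing endpoint. A minimal sketch of the delay schedule, with assumed base and cap values:

```python
def backoff_schedule(retries: int, base: float = 1.0, cap: float = 30.0) -> list:
    """Exponential backoff delays in seconds, capped.

    With the defaults: 1, 2, 4, 8, 16, then 30 thereafter.
    """
    return [min(cap, base * (2 ** i)) for i in range(retries)]
```

A real probe would sleep for each delay between attempts and usually add random jitter so many clients do not retry in lockstep.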
OTHER INFORMATION
The DevOps Engineer will also be expected to demonstrate their commitment:
• to gedu values and regulations, including the equal opportunities policy.
• to gedu’s Social, Economic and Environmental responsibilities, minimising environmental impact in the performance of the role and actively contributing to the delivery of gedu’s Environmental Policy.
• to their Health and Safety responsibilities to ensure their contribution to a safe and secure working environment for staff, students, and other visitors to the campus.
POSITION SUMMARY:
We are looking for a passionate, high-energy individual to help build and manage the infrastructure network that powers the Product Development Labs for F5 Inc. The F5 Infra Engineer plays a critical role in our Product Development team by providing valuable services and tools for the F5 Hyderabad Product Development Lab. The Infra team supports both production systems and customized/flexible testing environments used by Test and Product Development teams. As an Infra Engineer, you’ll have the opportunity to work with cutting-edge technology alongside talented individuals. The ideal candidate will have experience in private and public cloud (AWS, Azure, GCP), OpenStack, storage, backup, VMware, KVM, Xen, and Hyper-V hypervisor server administration, networking, and automation in a data center operations environment at global enterprise scale with Kubernetes and OpenShift container platforms.
EXPERIENCE
7- 9+ Years – Software Engineer III
PRIMARY RESPONSIBILITIES:
- Drive the design, project build, infrastructure setup, monitoring, measurement, and improvement of the quality of services provided, including network and virtual instance services from OpenStack, VMware VIO, public and private cloud, and DevOps environments.
- Work closely with customers to understand requirements and deliver on agreed timelines.
- Work closely with F5 architects and vendors to understand emerging technologies and the F5 product roadmap, and how they would benefit the Infra team and its users.
- Work closely with the team and complete deliverables on time.
- Consult with testers and application and service owners to design scalable, supportable network infrastructure that meets usage requirements.
- Assume ownership of large/complex systems projects; mentor Lab Network Engineers in best practices for the ongoing maintenance and scaling of large/complex systems.
- Drive automation efforts for the configuration and maintainability of the public/private cloud.
- Lead product selection for replacement or new technologies.
- Address user tickets in a timely manner for the covered services.
- Deploy, manage, and support production and pre-production environments for our core systems and services.
- Migrate and consolidate infrastructure.
- Design and implement major service and infrastructure components.
- Research, investigate, and define new areas of technology to enhance existing services or set new service directions.
- Evaluate the performance of services and infrastructure; tune and re-evaluate the design and implementation of current source code and system configuration.
- Create and maintain scripts and tools to automate the configuration, usability, and troubleshooting of the supported applications and services.
- Take ownership of activities and new initiatives.
- Provide global infrastructure support from India for product development teams.
- Provide on-call support on a rotational basis across global time zones.
- Manage vendors for hardware and software evaluations and keep systems up to date.
KNOWLEDGE, SKILLS AND ABILITIES:
- In-depth, multi-disciplined knowledge of storage, compute, network, and DevOps technologies, including the latest cutting-edge technologies.
- Multi-cloud: AWS, Azure, GCP, OpenStack; DevOps operations.
- IaaS: Infrastructure as a Service, Metal as a Service, platform services.
- Storage: Dell EMC, NetApp, Hitachi, Qumulo, and other storage technologies.
- Hypervisors: VMware, Hyper-V, KVM, Xen, and AHV.
- DevOps: Kubernetes, OpenShift, Docker, and other container and orchestration platforms.
- Automation: scripting experience in Python/Shell/Golang, full-stack development, and application deployment.
- Tools: Jenkins, Splunk, Kibana, Terraform, Bitbucket, Git, CI/CD configuration.
- Data center operations: racking, stacking, cable matrix, solution design, and solutions architecture.
- Networking skills: Cisco/Arista switches and routers; experience with cable matrix design and pathing (fiber/copper).
- Experience in SAN/NAS storage (EMC/Qumulo/NetApp and others).
- Experience with Red Hat Ceph storage.
- A working knowledge of Linux, Windows, and hypervisor operating systems and virtual machine technologies.
- SME (subject-matter expert) for cutting-edge technologies.
- Data center architect professional and storage expert-level certified professional experience.
- A solid understanding of high-availability systems, redundant networking, and multipathing solutions.
- Proven problem resolution related to network infrastructure; judgment, negotiating, and decision-making skills along with excellent written and oral communication skills.
- Working experience with object, block, and file storage technologies.
- Experience in backup technologies and backup administration.
- Dell/HP/Cisco UCS server administration is an additional advantage.
- Ability to quickly learn and adopt new technologies.
- Strong experience with and exposure to open-source platforms.
- Working experience with monitoring tools such as Zabbix, Nagios, Datadog, etc.
- Working experience with bare-metal services and OS administration.
- Working experience with cloud connectivity such as AWS IPsec, Azure ExpressRoute, GCP VPN tunnels, etc.
- Working experience with software-defined networking (VMware NSX, SDN, Open vSwitch, etc.).
- Working experience with systems engineering and Linux/Unix administration.
- Database administration experience with PostgreSQL, MySQL, and NoSQL.
- Automation/configuration management experience using Puppet, Chef, or an equivalent.
- Working experience with DevOps operations: Kubernetes, containers, Docker, and Git repositories.
- Experience with build system processes, code inspection, and delivery methodologies.
- Knowledge of creating operational dashboards and execution lanes.
- Experience and knowledge of DNS, DHCP, LDAP, AD, domain-controller services, and PXE services.
- SRE experience: responsibility for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.
- Vendor support: OEM upgrades, coordinating technical support, and troubleshooting experience.
- Experience in handling on-call support and escalation processes.
- Knowledge of scale-out and scale-in architecture.
- Working experience with ITSM/process management tools like ServiceNow, Jira, and Jira Align.
- Knowledge of Agile and Scrum principles.
- Working experience with ServiceNow.
- Knowledge sharing, transition experience, and self-learning behaviors.
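The SRE responsibilities listed above (availability, capacity planning, emergency response) are commonly framed in terms of error budgets. A small sketch follows; the 99.9% target in the example is an assumption for illustration, not a figure from this posting.

```python
def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Allowed downtime in minutes per period for a given availability SLO.

    slo is a fraction, e.g. 0.999 for "three nines".
    """
    return (1.0 - slo) * period_days * 24 * 60
```

For example, a 99.9% SLO over a 30-day window allows roughly 43 minutes of downtime; once the budget is spent, teams typically freeze risky releases until reliability recovers.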
JD:
- Demonstrable hands-on experience working with Azure technologies.
- Extensive experience with Azure DevOps and pipelines.
- Experience working with Jira and Github.
- Experience with Infrastructure as Code, using Terraform to model cloud infrastructure.
- Practical experience of engineering best practices with continuous improvement.
Required
- HashiCorp Certified: Terraform Associate
- Microsoft Certified: Azure Fundamentals
Desirable
Microsoft Certified: Azure Administrator Associate
Location- Remote
Job Type: Full-time, on contract