Sr. Software Engineer (DevOps)
Posted by Dhanalakshmi D
4 - 6 yrs
₹10L - ₹15L / yr
Bengaluru (Bangalore)
Skills
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)

We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client in Bangalore; this is a permanent role.

Experience: 4+ Yrs

Responsibilities:

• As part of a team, you will design, develop, and maintain a scalable multi-cloud DevOps blueprint.

• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering and legacy application modernization

• Continuously improve the CI/CD pipeline, tools, processes, procedures, and systems relating to developer productivity

• Collaborate continuously with the product development teams to implement the CI/CD pipeline.

• Contribute subject-matter expertise on developer productivity, DevOps, and infrastructure automation best practices.


Mandatory Skills:

• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.

• Strong scripting skills (Java or Python) are a must.

• Experience with automation tools such as Ansible, Chef, Puppet etc.

• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle

• Hands-on working experience in developing or deploying microservices is a must.

• Hands-on working experience with at least one of the popular cloud infrastructures such as AWS / Azure / GCP / Red Hat OpenStack is a must.

• Knowledge about microservices hosted in leading cloud environments

• Experience with containerizing applications (Docker preferred) is a must

• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.

• Strong problem-solving and analytical skills, and a good understanding of best practices for building, testing, deploying, and monitoring software
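Much of the scripting and CI/CD work above is small glue code. As an illustrative sketch only (the function names, tag format, and registry hostname are our own assumptions, not from this posting), here is how a pipeline step might derive an immutable Docker image tag from Git metadata:

```python
import re

def image_tag(branch: str, sha: str, build_number: int) -> str:
    """Derive an immutable, registry-safe image tag from CI metadata.

    Docker tags may only contain [A-Za-z0-9_.-], so branch names like
    'feature/login-page' are sanitized before use.
    """
    safe_branch = re.sub(r"[^A-Za-z0-9_.-]+", "-", branch).strip("-").lower()
    return f"{safe_branch}-{sha[:8]}-b{build_number}"

def full_image_ref(registry: str, repo: str, tag: str) -> str:
    """Compose the full reference a 'docker build -t <ref>' step would use."""
    return f"{registry}/{repo}:{tag}"

tag = image_tag("feature/login-page", "9fceb02aab1f2c3d4e5f6a7b8c9d0e1f23456789", 42)
print(tag)  # feature-login-page-9fceb02a-b42
print(full_image_ref("registry.example.com", "webapp", tag))
```

Tagging images with the commit SHA rather than `latest` keeps deployments reproducible and rollbacks trivial.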


Desirable Skills:

• Experience working with Secret management services such as HashiCorp Vault is desirable.

• Experience working with Identity and access management services such as Okta, Cognito is desirable.

• Experience with monitoring systems such as Prometheus, Grafana is desirable.
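As context for the monitoring tools above: Prometheus alerting rules usually fire only after a condition has held for a sustained window (the rule's `for` duration), to avoid paging on brief spikes. A minimal Python sketch of that idea, with illustrative metric values and thresholds:

```python
def should_alert(samples, threshold, hold_seconds):
    """Return True if the metric stays above `threshold` for at least
    `hold_seconds` of consecutive samples, mimicking a Prometheus rule
    with a `for:` duration.

    `samples` is a list of (timestamp_seconds, value) pairs in time order.
    """
    breach_start = None
    for ts, value in samples:
        if value > threshold:
            if breach_start is None:
                breach_start = ts  # breach window opens here
            if ts - breach_start >= hold_seconds:
                return True
        else:
            breach_start = None    # breach interrupted; reset the window
    return False

# CPU at ~95% for 60s+ should alert with a 60s hold; a brief spike should not.
sustained = [(0, 95), (30, 96), (60, 97), (90, 95), (120, 96)]
spike = [(0, 95), (30, 40), (60, 96), (90, 35)]
print(should_alert(sustained, 90, 60))  # True
print(should_alert(spike, 90, 60))      # False
```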


Educational Qualifications and Experience:

• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)

• 4 to 6 years of hands-on experience in server-side application development & DevOps


About Reqroots

Founded: 2011
Type: Products & Services
Size: 20-100
Stage: Raised funding

About

Kovai Soft Technologies is a leading mobile application development, software development, and SEO company in Chennai, South India. Founded in 2009, it is known for its holistic approach to designing, developing, and delivering end-to-end web and mobile development solutions with a high customer retention rate.

Connect with the team

Dhanalakshmi D
Revathi Shankar
Abinesh Suresh
Sathya Priya
Gayathri G
Hari Priya


Similar jobs

Infilect
Posted by Indira Ashrit
Bengaluru (Bangalore)
2 - 3 yrs
₹12L - ₹15L / yr
Kubernetes
Docker
CI/CD
Google Cloud Platform (GCP)

Job Description:


Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.


We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our Cloud, AI workflows, and AI-based Computer Systems. Furthermore, the candidate will supervise the implementation and maintenance of the company’s computing needs including the in-house GPU & AI servers along with AI workloads.



Responsibilities

  • Understanding and automating AI-based deployments and AI-based workflows
  • Implementing various development, testing, automation tools, and IT infrastructure
  • Manage Cloud, computer systems and other IT assets.
  • Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
  • Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
  • Ensure the security of data, network access, and backup systems
  • Act in alignment with user needs and system functionality to contribute to organizational policy
  • Identify problematic areas, perform RCA and implement strategic solutions in time
  • Preserve assets, information security, and control structures
  • Handle monthly/annual cloud budget and ensure cost effectiveness
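A CI/CD pipeline like the one in the responsibilities above is essentially a dependency graph of stages. As a hedged sketch (the stage names are hypothetical), Python's standard-library `graphlib` can compute a valid execution order:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: each stage maps to the set of stages it depends on.
pipeline = {
    "lint": set(),
    "unit-tests": set(),
    "build-image": {"lint", "unit-tests"},
    "integration-tests": {"build-image"},
    "deploy-staging": {"integration-tests"},
    "deploy-prod": {"deploy-staging"},
}

# static_order() yields the stages so every stage runs after its dependencies.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

Real CI systems (Jenkins, GitLab CI) do the same resolution from a declarative config; stages with no mutual dependency (here `lint` and `unit-tests`) can run in parallel.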


Requirements and skills

  • Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
  • Working knowledge of Python, a SQL database stack, or any full stack with relevant tools.
  • Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
  • Well versed with ELK stack or any other logging, monitoring and analysis tools
  • Proven working experience of 2+ years as a DevOps/Tech Lead/IT Manager or in relevant positions
  • Excellent knowledge of technical management, information analysis, and of computer hardware/software systems
  • Hands-on experience with computer networks, network administration, and network installation
  • Knowledge of ISO/SOC Type II implementation will be a plus
  • BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field


Designing a generic ML platform as a product
Agency job via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹50L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Requirements

  • 3+ years of work experience writing clean production code
  • Well versed in maintaining infrastructure as code (Terraform, CloudFormation, etc.); high proficiency with Terraform/Terragrunt is critical
  • Experience setting up CI/CD pipelines from scratch
  • Experience with AWS (EC2, ECS, RDS, ElastiCache, etc.), AWS Lambda, Kubernetes, Docker, service mesh
  • Experience with ETL pipelines and big data infrastructure
  • Understanding of common security issues

Roles / Responsibilities:

  • Write Terraform modules for deploying different components of infrastructure in AWS, such as Kubernetes, RDS, Prometheus, Grafana, and static websites
  • Configure networking, autoscaling, continuous deployment, security, and multiple environments
  • Make sure the infrastructure is SOC 2, ISO 27001, and HIPAA compliant
  • Automate all the steps to provide a seamless experience to developers.
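The autoscaling mentioned above typically means Kubernetes' Horizontal Pod Autoscaler, whose documented core rule is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A small Python sketch of that formula (the min/max bounds here are illustrative defaults, not from the posting):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Kubernetes HPA core rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    clamped to the [min_replicas, max_replicas] bounds.
    """
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))   # 6
# Load drops to 20% -> scale in, but never below min_replicas.
print(desired_replicas(4, 20, 60))   # 2
```

The real HPA adds tolerances and stabilization windows around this rule, but the proportional core is exactly this calculation.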
Infra360 Solutions Pvt Ltd
Posted by HR Infra360
Gurugram
3 - 7 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Please Apply - https://zrec.in/L51Qf?source=CareerSite


About Us

Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture in their organization by focusing on a long-term DevOps roadmap. We identify technical and cultural issues in the journey of successfully implementing DevOps practices in the organization and work with the respective teams to fix those issues and increase overall productivity. We also run training sessions for developers to convey the importance of DevOps.

Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Review, MLOps, and Governance, Risk & Compliance.

We assess technology architecture, security, governance, compliance, and the DevOps maturity model for any technology company and help them optimize their cloud cost, streamline their technology architecture, and set up processes to improve the availability and reliability of their websites and applications. We set up tools for monitoring, logging, and observability, and we focus on bringing the DevOps culture to the organization to improve its efficiency and delivery.


Job Description

Job Title: Senior DevOps Engineer (Infrastructure/SRE)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediately



About Us

At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.

Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.

We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.


Role Summary

We are looking for a Senior DevOps Engineer (Infrastructure) to design, automate, and manage cloud-based and datacentre infrastructure for diverse projects. The ideal candidate will have deep expertise in a public cloud platform (AWS, GCP, or Azure), with a strong focus on cost optimization, security best practices, and infrastructure automation using tools like Terraform and CI/CD pipelines.

This role involves designing scalable architectures (containers, serverless, and VMs), managing databases, and ensuring system observability with tools like Prometheus and Grafana. Strong leadership, client communication, and team mentoring skills are essential. Experience with VPN technologies and configuration management tools (Ansible, Helm) is also critical. Multi-cloud experience and familiarity with APM tools are a plus.


Ideal Candidate Profile


  • A solid 4-6 years of experience as a DevOps engineer with a proven track record of architecting and automating solutions on the cloud
  • Experience in troubleshooting production incidents and handling high-pressure situations.
  • Strong leadership skills and the ability to mentor team members and provide guidance on best practices.
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Extensive experience with Kubernetes, Terraform, ArgoCD, and Helm.
  • Strong with at least one public cloud (AWS/GCP/Azure)
  • Strong with cost optimization and security best practices
  • Strong with infrastructure automation using Terraform and CI/CD automation
  • Strong with configuration management using Ansible, Helm, etc.
  • Good at designing architectures (containers, serverless, VMs, etc.)
  • Hands-on experience working on multiple projects
  • Strong client communication and requirements-gathering skills
  • Database management experience
  • Good experience with Prometheus, Grafana & Alertmanager
  • Able to manage multiple clients and take ownership of client issues.
  • Experience with Git and coding best practices
  • Proficiency in cloud networking, including VPCs, DNS, VPNs (OpenVPN, OpenSwan, Pritunl, Site-to-Site VPNs), load balancers, and firewalls, ensuring secure and efficient connectivity.
  • Strong understanding of cloud security best practices, identity and access management (IAM), and compliance requirements for modern infrastructure.


Good to have

  • Multi-cloud experience with AWS, GCP & Azure
  • Experience with APM & observability tools like New Relic, Datadog, and OpenTelemetry
  • Proficiency in scripting languages (Python, Go) for automation and tooling to improve infrastructure and application reliability.


Key Responsibilities


  Design and Development:
    • Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
    • Collaborate with product and engineering teams to translate business requirements into technical specifications.
    • Write clean, maintainable, and efficient code, following best practices and coding standards.
  Cloud Infrastructure:
    • Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
    • Implement and manage CI/CD pipelines for automated deployment and testing.
    • Ensure the security, reliability, and performance of cloud infrastructure.
  Technical Leadership:
    • Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
    • Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
    • Lead technical discussions and contribute to architectural decisions.
  Problem Solving and Troubleshooting:
    • Identify, diagnose, and resolve complex software and infrastructure issues.
    • Perform root cause analysis for production incidents and implement preventative measures.
  Continuous Improvement:
    • Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
    • Contribute to the continuous improvement of development processes, tools, and methodologies.
    • Drive innovation by experimenting with new technologies and solutions to enhance the platform.
  Collaboration:
    • Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
    • Communicate effectively with stakeholders, including technical and non-technical team members.
  Client Interaction & Management:
    • Serve as a direct point of contact for multiple clients.
    • Handle the unique technical needs and challenges of two or more clients concurrently.
    • Involves both direct interaction with clients and internal team coordination.
  Production Systems Management:
    • Extensive experience in managing, monitoring, and debugging production environments is a must.
    • Troubleshoot complex issues and ensure that production systems run smoothly with minimal downtime.
Atmecs Ltd
Agency job via Dangi Digital Media LLP by Jaibir Dangi
Hyderabad, Bengaluru (Bangalore), Coimbatore
4 - 8 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Must have a minimum of 4 years of proficient DevOps experience, with at least one end-to-end DevOps project implementation.
Strong expertise in DevOps concepts like Continuous Integration (CI), Continuous Delivery (CD), Infrastructure as Code, and cloud deployments.
Minimum of 2.5-3 years of experience in configuration, development, and deployment with the underlying technologies, including Docker/Kubernetes and Prometheus.
Should have implemented an end-to-end DevOps pipeline using Jenkins or a similar framework.
Experience with microservices architecture.
Should have sound knowledge of branching and merging strategies.
Experience working with cloud computing technologies like Oracle Cloud (preferred)/GCP/AWS/OpenStack.
Strong experience in the AWS/Azure/GCP/OpenStack deployment process and dockerization.
Good experience with release management tools like JIRA or similar.
Good to have: knowledge of infra automation tools such as Terraform/Chef/Ansible (preferred).
Experience with test automation tools like Selenium/Cucumber/Postman.
Good communication skills to present DevOps solutions to the client and drive the implementation.
Experience in creating and managing custom operational and monitoring scripts.
Good knowledge of source control tools like Subversion, Git, Bitbucket, ClearCase.
Experience in system architecture design.
LogiNext
Posted by Rakhi Daga
Mumbai
8 - 10 yrs
₹1L - ₹1L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

LogiNext is looking for a technically savvy and passionate Principal DevOps Engineer or Senior Database Administrator to cater to the development and operations efforts in the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.

You have hands-on experience in building secure, high-performing, and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging, and production environments.


Responsibilities:


  • Design and implement scalable infrastructure for delivering and running web, mobile, and big data applications on the cloud
  • Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
  • Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
  • Plan, implement, and maintain robust backup and restoration policies ensuring low RTO and RPO
  • Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
  • Define and build processes to identify performance bottlenecks and scaling pitfalls
  • Manage robust monitoring and alerting infrastructure
  • Explore new tools to improve development operations and automate daily tasks
  • Ensure high availability and auto-failover with minimal or no manual intervention


Requirements:


  • Bachelor’s degree in Computer Science, Information Technology, or a related field
  • 8 to 10 years of experience in designing and maintaining high-volume, scalable microservices architecture on cloud infrastructure
  • Strong background in Linux/Unix administration and Python/Shell scripting
  • Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
  • Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch, Nagios
  • Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
  • Experience in query analysis, performance tuning, and database redesign
  • Experience in enterprise application development, maintenance, and operations
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Excellent written and oral communication skills, judgment, and decision-making skills

APT Portfolio
Posted by Ankita Pachauri
Delhi, Gurugram, Bengaluru (Bangalore)
10 - 15 yrs
₹50L - ₹70L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure

A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading and investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.


As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:

  • Private Cloud - Design & maintain a high performance and reliable network architecture to support  HPC applications
  • Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
  • Security - Implementing best security practices and implementing data isolation policy between different divisions internally. 
  • Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis. 
  • Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
  • NFS - Implement and optimize latest version of NFS for our use case. 
  • Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm. 
  • Backups - Identify and automate backup of all crucial data/binaries/code etc. in a secure manner, at whatever frequency the use case warrants. Ensure that recovery from backup is tested and seamless.
  •  Access Control - Maintain passwordless access control and improve security over time. Minimize failures for automated jobs due to unsuccessful logins.
  •  Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
  •  Configuration management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
  •  Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
  • Audit access logs on devices. Use third party tools to put in a monitoring mechanism for early detection of any suspicious activity. 
  • Maintaining all third-party tools used for development and collaboration - This shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo, etc.
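The backup automation described above usually pairs with a retention policy. A simplified sketch of grandfather-father-son-style rotation (the keep counts and dates are illustrative assumptions, not from this posting): given dated backups, keep the newest dailies plus the earliest backup of each recent month.

```python
from datetime import date

def backups_to_keep(backup_dates, keep_daily=7, keep_monthly=3):
    """Select backups to retain: the newest `keep_daily` backups, plus the
    earliest backup in each of the most recent `keep_monthly` months
    (a simple grandfather-father-son rotation sketch)."""
    ordered = sorted(backup_dates)
    keep = set(ordered[-keep_daily:])        # newest daily backups
    firsts = {}                              # (year, month) -> earliest backup
    for d in ordered:
        firsts.setdefault((d.year, d.month), d)
    for month in sorted(firsts)[-keep_monthly:]:
        keep.add(firsts[month])              # monthly "grandfather" copies
    return sorted(keep)

dates = [date(2024, m, day) for m in (1, 2, 3) for day in (1, 10, 20)]
print(backups_to_keep(dates, keep_daily=4, keep_monthly=2))
```

Everything not returned by the function is a candidate for deletion, which is how retention keeps storage cost bounded while preserving recovery points.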


Qualifications 

  • Bachelors or Masters Level Degree, preferably in CSE/IT
  • 10+ years of relevant experience in sys-admin function
  • Must have strong knowledge of IT Infrastructure, Linux, Networking and grid.
  • Must have strong grasp of automation & Data management tools.
  • Efficient in scripting languages, including Python


Desirables

  • Professional attitude, co-operative and mature approach to work, must be focused, structured and well considered, troubleshooting skills.
  •  Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.

 

APT Portfolio is an equal opportunity employer

One of our MNC clients
Agency job via CETPA InfoTech by Priya Gautam
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
1 - 10 yrs
₹5L - ₹30L / yr
Docker
Kubernetes
DevOps
Linux/Unix
SQL Azure

Mandatory:
● A minimum of 1 year of development, system design, or engineering experience
● Excellent social, communication, and technical skills
● In-depth knowledge of Linux systems
● Development experience in at least two of the following languages: PHP, Go, Python, JavaScript, C/C++, Bash
● In-depth knowledge of web servers (Apache; Nginx preferred)
● Strong in using DevOps tools - Ansible, Jenkins, Docker, ELK
● Knowledge of APM tools; New Relic is preferred
● Ability to learn quickly, master our existing systems, and identify areas of improvement
● Self-starter who enjoys and takes pride in the engineering work of their team
● Tried and tested real-world cloud computing experience - AWS/GCP/Azure
● Strong understanding of resilient systems design
● Experience in network design and management
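Resilient systems design, listed above, commonly starts with retries and exponential backoff. A hedged sketch (the base delay, cap, and retry count are illustrative choices; production code would add random jitter to avoid thundering herds):

```python
import time

def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Exponential backoff schedule: base * 2**attempt, capped at `cap` seconds."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

def call_with_retries(func, max_retries=5, sleep=None):
    """Call `func`, retrying on any exception with exponential backoff.
    `sleep` is injectable so tests can run without actually waiting."""
    sleep = sleep or time.sleep
    delays = backoff_delays(max_retries)
    for attempt, delay in enumerate(delays):
        try:
            return func()
        except Exception:
            if attempt == len(delays) - 1:
                raise  # retries exhausted; surface the error
            sleep(delay)

print(backoff_delays())  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

Injecting the `sleep` function is a small design choice that makes the retry logic unit-testable, which matters when this code guards production calls.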
MTX
Posted by Sinchita S
Hyderabad
7 - 10 yrs
₹38L - ₹56L / yr
DevOps
CI/CD
Google Cloud Platform (GCP)
PostgreSQL
Jenkins

MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX’s very own Artificial Intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks designed to leverage the power of Machine Learning. The Maverick Platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.


Responsibilities:

  • Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
  • Troubleshoot technical or functional issues in a complex environment to provide timely resolution, with various applications and platforms that are global.
  • Bring experience on Google Cloud Platform.
  • Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
  • Configure and manage data sources like PostgreSQL, MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc
  • Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
  • Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
  • Work with users to understand and gather their needs in our catalogue, then participate in the required developments
  • Manage several streams of work concurrently
  • Understand how various systems work
  • Understand how IT operations are managed


What you will bring:

  • 5 years of work experience as a DevOps Engineer.
  • Must possess ample knowledge and experience in system automation, deployment, and implementation.
  • Must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
  • Experience in the software development process and tools and languages like SaaS, Python, Java, MongoDB, shell scripting, MySQL, and Git.
  • Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.

What we offer:


  • Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
    • Sum Insured: INR 5,00,000/- 
    • Maternity cover up to two children
    • Inclusive of COVID-19 Coverage
    • Cashless & Reimbursement facility
    • Access to free online doctor consultation

  • Personal Accident Policy (Disability Insurance) -
  • Sum Insured: INR. 25,00,000/- Per Employee
  • Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
  • Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
  • Temporary Total Disability is covered

  • An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver  benefit
  • Monthly internet reimbursement of up to Rs. 1,000
  • Opportunity to pursue Executive Programs/ courses at top universities globally
  • Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others


Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
Job Title: DevOps Engineer Work Experience: 3-7 years Qualification: B.E / M. Tech Location: Bangalore, India About Pramata Pramata’s unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications through the Pramata cloud-based customer digitization platform. Pramata’s customers are some of the largest companies in the world including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, as well as enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India. How Pramata Works Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing and other systems, and delivers it in the context of a particular user’s role and responsibilities. This is done through Pramata’s unique Digitization-as-a-Service (DaaS) process which transforms unstructured and diverse data into accurate, timely and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth along with ensuring that this data remains consistent, accessible and highly secure. The opportunity - What you get to do You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment & application management and day-to-day support of development teams. 
You will manage the development of capabilities to achieve higher automation, quality and performance in automated build and deployment management, release management, on-demand environment configuration & automation, configuration and change management and in production environment support - Application monitoring, performance management and production support of mission-critical applications including application and system uptime and remote diagnostics - Security - Ensure that the highly sensitive data from our customers is secure at all times. - Instrument applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues. - High availability and disaster recover - Build and maintain systems that are designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place. - Automate provisioning and integration tasks as required to deploy new code. - Monitoring - Proactive steps to monitor complex interdependent systems to ensure that issues are being identified and addressed in real-time. Skills required: - Excellent communicator with great interpersonal skills, driving clarity about the intricate systems - Come with hands-on experience in application infrastructure technologies like Linux(RHEL), MySQL, Apache, Nginx, Phusion passenger, Redis etc. - Good understanding of software application builds, configuration management and deployments - Strong scripting skills like Shell, Ruby, Python, Perl etc. Comes with passion for automation - Comfortable with collaboration, open communication and reaching across functional borders. - Advanced problem-solving and task break-down ability. Additional Skills (Good to have but not mandatory): - In depth understanding and experience working with any Cloud Platforms (e.g: AWS, Azure, Google cloud etc) - Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc. 
- Able to work under pressure and solve problems with an analytical approach; decisive, fast-moving, and positive in attitude

Minimum Qualifications:
- Bachelor’s Degree in Computer Science or a related field
- Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software
- Strong programming skills in Python, Shell or Java
- Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
- Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS
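The configuration management tools listed above (Ansible, Chef, Salt, Puppet) share one core idea: tasks declare a desired state and are idempotent, so re-running them changes nothing if the state is already correct. A minimal sketch of that idea, using a temporary file rather than any real config path:

```python
# Sketch of the idempotency idea behind config management tools:
# "ensure" a desired state instead of blindly applying a change.
# The file path is a throwaway temp file, not a real config location.
import os
import tempfile

def ensure_line(path, line):
    """Append `line` to `path` only if it is not already present.
    Returns True if the file changed (like a 'changed' task result)."""
    try:
        with open(path) as f:
            if line in (l.rstrip("\n") for l in f):
                return False  # already in desired state; no-op
    except FileNotFoundError:
        pass  # file absent: creating it is part of reaching the state
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

path = os.path.join(tempfile.mkdtemp(), "app.conf")
print(ensure_line(path, "max_connections=100"))  # True: first run changes
print(ensure_line(path, "max_connections=100"))  # False: rerun is a no-op
```

Real tools add much more (templating, ordering, rollback, inventory), but this changed/unchanged contract is what makes their runs safe to repeat.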
Directi
Posted by Shilpa L
Bengaluru (Bangalore), Mumbai
3 - 10 yrs
₹15L - ₹35L / yr
DevOps
Amazon Web Services (AWS)
automation tools
The Enterprise Solutions team is responsible for end-to-end development, enhancement, customization and integration of Zeta products and solutions based on business requirements.

What is the job like?
* Architecting solutions and driving the technical aspects of projects
* Breaking down complex requirements into simpler stories
* Working with various stakeholders and helping convert requirements into code
* Managing, mentoring and reviewing engineers for their technical contribution
* Participating actively in hiring and nurturing talent

Who should apply?
* Bachelor’s/Master’s degree in engineering (computer science, information systems)
* 10+ years of experience building enterprise systems, including at least 2 years of direct people management experience
* Worked on large-scale Java/JSP applications, with a good understanding of the web stack
* Good understanding of the nuances of distributed systems
* Good understanding of relational databases (preferred: MySQL/PostgreSQL)
* Good understanding of reporting/BI systems (preferred: Crystal, Jasper)
* Worked with IaaS such as AWS/GCP/Azure
* Worked with message brokers and application containers
* Can analyse, design, architect, develop and maintain software solutions across multiple projects
* Can direct and provide ongoing leadership for a team of individual contributors: set objectives, review performance, define growth plans and nurture talent
* Drives best practices and is a pro with agile methodologies/practices: SCRUM, Test Driven Development (TDD)
* Can manage headcount, deliverables and schedules across ongoing projects, ensuring that resources are appropriately allocated and timelines are met in accordance with project roadmaps

Zeta is a revolutionary fintech startup making strides in the world of employee benefits & rewards, cafeteria management and digital payments. Zeta (a bootstrapped fintech startup) is part of the Directi group, a prestigious tech conglomerate with a 17-year-long history and 25 software products in the market.
The group has churned out successful mass-market businesses like Media.net, Flock, Ringo, Radix, Skenzo and Codechef, without any external funding.

Here’s what we’ve built so far:
1. Zeta Optima: a fully digitized employee tax-benefits programme that helps employees save over 80K in taxes and helps organizations save up to 90% of their time and resources
2. Zeta Express: a corporate cafeteria solution that makes cafeterias automated and completely cashless
3. Zeta Super Card: an advanced card-based payment solution that is 10x more secure than bank-issued cards
4. Zeta Spotlight: a digitized rewards, recognition and gifting solution that is easy to distribute and easy to spend