
AWS DevOps Engineer

Posted by Charul Joshi
2 - 15 yrs
₹8L - ₹40L / yr
Remote only
Skills
Amazon Web Services (AWS)
Terraform
Docker
Kubernetes
DevOps
Ansible
Jenkins

Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward to digital transformation via assessments, migration, or modernization.


We are looking for a DevOps Engineer with expertise in infrastructure as code, configuration management, continuous integration, continuous deployment, and automated monitoring for big data workloads, large enterprise applications, customer applications, and databases.


You will have hands-on technology expertise coupled with a background in professional services and client-facing skills. You are passionate about cloud deployment best practices and about ensuring that customer expectations are set and met appropriately. If you love to solve problems using your skills, then join Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.


What will you do?

  • Automate infrastructure creation with Terraform and AWS CloudFormation (see the sketch after this list).
  • Perform application configuration management and application deployment with tooling that enables infrastructure as code.
  • Take ownership of the build and release cycle of customer projects.
  • Share responsibility for deploying releases and performing other operational maintenance.
  • Enhance operations infrastructure such as Jenkins clusters, Bitbucket, monitoring tools (Consul), and metrics tools such as Graphite and Grafana.
  • Provide operational support to the rest of the engineering team and help migrate our remaining dedicated hardware infrastructure to the cloud.
  • Establish and maintain operational best practices.
  • Participate in hiring engineers who fit the culture, and mentor engineers on their career paths.
  • Design the team strategy in collaboration with the organization's founders.
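To illustrate the kind of infrastructure-as-code automation referenced above, here is a minimal Python sketch (not Mactores' actual tooling) that wraps the Terraform CLI and uses its -detailed-exitcode flag to detect pending changes; the envs/dev working directory is a hypothetical example.

import subprocess
import sys

def terraform_plan_has_changes(workdir: str) -> bool:
    """Run `terraform plan` and report whether changes are pending.

    With -detailed-exitcode, terraform exits 0 for "no changes",
    2 for "changes present", and 1 for errors.
    """
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
    )
    if result.returncode == 1:
        raise RuntimeError("terraform plan failed")
    return result.returncode == 2

if __name__ == "__main__":
    # "envs/dev" is a hypothetical Terraform working directory.
    workdir = sys.argv[1] if len(sys.argv) > 1 else "envs/dev"
    if terraform_plan_has_changes(workdir):
        print("Changes detected - review the plan before applying.")
    else:
        print("Infrastructure matches the Terraform configuration.")

In practice, a CI job would run a gate like this before any terraform apply step.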

What are we looking for?

  • 4+ years of experience using Terraform for IaC.
  • 4+ years of configuration management and engineering for large-scale customers, ideally supporting an Agile development process.
  • 4+ years of Linux or Windows administration experience.
  • 4+ years of experience with version control systems (Git), including branching and merging strategies.
  • 2+ years of experience working with AWS infrastructure and platform services.
  • 2+ years of experience with cloud automation tools (Ansible, Chef).
  • Exposure to working with container services like Kubernetes on AWS, ECS, and EKS (a minimal readiness-check sketch follows this list).
  • You are extremely proactive at identifying ways to improve things and make them more reliable.
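As a hedged illustration of the Kubernetes-on-AWS exposure mentioned above, the following Python sketch uses the official kubernetes client to check whether a Deployment is fully available. It assumes a kubeconfig already authenticated against the cluster (for example via aws eks update-kubeconfig); the namespace and deployment names are placeholders.

from kubernetes import client, config

def deployment_ready(namespace: str, name: str) -> bool:
    """Return True when all desired replicas of a Deployment are available."""
    # Assumes a kubeconfig already pointing at the (EKS) cluster.
    config.load_kube_config()
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    available = dep.status.available_replicas or 0
    return available >= desired

if __name__ == "__main__":
    # "payments-api" and "staging" are hypothetical names for illustration.
    ok = deployment_ready(namespace="staging", name="payments-api")
    print("Deployment healthy" if ok else "Deployment not fully available")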

You will be preferred if you have:

  • Expertise in multiple cloud service providers: Amazon Web Services, Microsoft Azure, Google Cloud Platform
  • AWS Solutions Architect Professional or Associate level certification
  • AWS DevOps Engineer Professional certification

Life at Mactores


We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.


1. Be one step ahead

2. Deliver the best

3. Be bold

4. Pay attention to the detail

5. Enjoy the challenge

6. Be curious and take action

7. Take leadership

8. Own it

9. Deliver value

10. Be collaborative


You can read more about our work culture at https://mactores.com/careers


The Path to Joining the Mactores Team

At Mactores, our recruitment process is structured around three distinct stages:


Pre-Employment Assessment: 

You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.


Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.


HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.


At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.


About Mactores Cognition Private Limited

Founded: 2008
Type: Products & Services
Size: 20-100
Stage: Bootstrapped

About

Mactores is a global technology consulting and product company with a focus on delivering solutions in Cloud, Big Data, Deep Analytics, DevOps, IoT & AI.


Similar jobs

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Hyderabad
4 - 10 yrs
₹12L - ₹24L / yr
DevOps
Python
Ansible
Docker
Kubernetes
+4 more

Job Role : DevOps Engineer (Python + DevOps)

Experience : 4 to 10 Years

Location : Hyderabad

Work Mode : Hybrid

Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)


Job Description :

We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.

The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.


Key Responsibilities :

  • Design and manage containerization and orchestration using Docker & Kubernetes (a minimal sketch follows this list).
  • Automate deployments and infrastructure tasks using Ansible & Python.
  • Build and maintain CI/CD pipelines for streamlined software delivery.
  • Collaborate with development teams to integrate DevOps best practices.
  • Monitor, troubleshoot, and optimize system performance.
  • Enforce security best practices in containerized environments.
  • Provide operational support and contribute to continuous improvements.
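As a small illustration of the Docker work described in the first responsibility above, here is a hedged Python sketch using the Docker SDK for Python (the docker package); the build context path and image tag are hypothetical, not part of this employer's actual pipeline.

import docker

def build_and_smoke_test(context_dir: str, tag: str) -> str:
    """Build an image from a local Dockerfile and run a throwaway container."""
    client = docker.from_env()          # talks to the local Docker daemon
    image, _logs = client.images.build(path=context_dir, tag=tag)
    # Run a short-lived container and capture its stdout as a smoke test.
    output = client.containers.run(image.id, command="python --version", remove=True)
    return output.decode().strip()

if __name__ == "__main__":
    # "./app" and "demo-service:ci" are hypothetical values for illustration.
    print(build_and_smoke_test("./app", "demo-service:ci"))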

Required Qualifications :

  • Bachelor’s in Computer Science/IT or related field.
  • 4+ years of DevOps experience.
  • Proficiency in Python and Ansible.
  • Expertise in Docker and Kubernetes.
  • Hands-on experience with CI/CD tools and pipelines.
  • Experience with at least one cloud provider (AWS, Azure, or GCP).
  • Strong analytical, communication, and collaboration skills.

Preferred Qualifications :

  • Experience with Infrastructure-as-Code tools like Terraform.
  • Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
  • Understanding of Agile/Scrum practices.
Codnatives
Agency job
via VY SYSTEMS PRIVATE LIMITED by Ajeethkumar s
Bengaluru (Bangalore)
8 - 11 yrs
₹5L - ₹16L / yr
DevOps
Terraform
CI/CD
Kubernetes

  • Need 8+ years of experience in DevOps CI/CD
  • Managing large-scale AWS deployments using Infrastructure as Code (IaC) and Kubernetes developer tools
  • Managing build/test/deployment of very large-scale systems, bridging between developers and live stacks
  • Actively troubleshoot issues that arise during development and production
  • Owning, learning, and deploying software in support of customer-facing applications
  • Help establish DevOps best practices
  • Actively work to reduce system costs (a minimal cost-audit sketch follows this list)
  • Work with open-source technologies, helping to ensure their robustness and security
  • Actively work with CI/CD, Git and other component parts of the build and deployment system
  • Leading skills with the AWS cloud stack
  • Proven implementation experience with Infrastructure as Code (Terraform, Terragrunt, Flux, Helm charts) at scale
  • Proven experience with Kubernetes at scale
  • Proven experience with cloud management tools beyond the AWS console (k9s, Lens)
  • Strong communicator who people want to work with – must be thought of as the ultimate collaborator
  • Solid team player
  • Strong experience with Linux-based infrastructures and AWS
  • Strong experience with databases such as MySQL, Redshift, Elasticsearch, Mongo, and others
  • Strong knowledge of JavaScript, Git
  • Agile practitioner
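To illustrate the cost-reduction item above, here is a minimal, hypothetical Python sketch using boto3 to flag running EC2 instances that are missing a cost-allocation tag; the region and the CostCenter tag key are assumptions, not part of this employer's actual setup.

import boto3

def find_instances_missing_tag(region: str, required_tag: str) -> list:
    """List running EC2 instance IDs that lack a required cost-allocation tag."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    # "us-east-1" and the "CostCenter" tag key are assumptions for illustration.
    untagged = find_instances_missing_tag("us-east-1", "CostCenter")
    print(f"{len(untagged)} running instances missing a CostCenter tag: {untagged}")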

A software for haulers by haulers - CW
Agency job
via Qrata by Rayal Rajan
Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹40L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Python
+1 more
Role: Senior DevOps Engineer

What the role needs

● Review the current DevOps infrastructure and redefine the code-merging strategy as per product roll-out objectives
● Define the deployment frequency strategy based on the product roadmap and ongoing product-market-fit tweaks and changes
● Architect benchmark Docker configurations based on the planned stack
● Establish uniformity of environments from developer machines to multiple production environments
● Plan & execute test automation infrastructure
● Setup automated stress testing environment
● Plan and execute logging & stack trace tools
● Review DevOps orchestration tools & choices
● Coordination with external data centers and AWS in the event of provisioning, outages or maintenance.

Requirements

● Extensive experience with AWS cloud infrastructure deployment and monitoring
● Advanced knowledge of programming languages such as Python and golang, and writing code and scripts
● Experience with Infrastructure as code & devops management tools - Terraform, Packer for devops asset management for monitoring, infrastructure cost estimations, and Infrastructure version management
● Configure and manage data sources like MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc
● Experience with network, infrastructure and OWASP security standards
● Experience with web server configurations - Nginx, HAProxy, SSL configurations with AWS - and understanding and management of sub-domain based product rollouts for clients (a minimal certificate-expiry sketch follows this list).
● Experience with deployment and monitoring of event streaming and message distribution technologies and tools - Kafka, RabbitMQ, NATS.io, Socket.IO.
● Understanding & experience of Disaster Recovery Plan execution
● Working with other senior team members to devise and execute strategies for data backup and storage
● Be aware of current CVEs, potential attack vectors, and vulnerabilities, and apply patches as soon as possible
● Handle incident responses, troubleshooting and fixes for various services
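As a hedged illustration of the SSL-configuration item above, the following standard-library Python sketch reports how many days remain before a host's TLS certificate expires; the client sub-domains listed are placeholders.

import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(hostname: str, port: int = 443) -> int:
    """Return how many days remain before a host's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like: 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    # Hypothetical client sub-domains; replace with real ones.
    for host in ["client-a.example.com", "client-b.example.com"]:
        try:
            print(f"{host}: certificate expires in {days_until_cert_expiry(host)} days")
        except OSError as exc:
            print(f"{host}: check failed ({exc})")
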
ZeMoSo Technologies
Posted by HR Team
Remote only
4 - 8 yrs
₹10L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Looking for a GCP DevOps Engineer who can join immediately or within 15 days.

 

Job Summary & Responsibilities:

 

Job Overview:

 

You will work with engineering and development teams to integrate and develop cloud solutions and virtualized deployments of a software-as-a-service product. This requires an understanding of the software system architecture as well as its performance and security requirements. The DevOps Engineer is also expected to have expertise in available cloud solutions and services, administration of virtual machine clusters, performance tuning and configuration of cloud computing resources, configuration of security, and scripting and automation of monitoring functions. This position requires the deployment and management of multiple virtual clusters and working with compliance organizations to support security audits. The design and selection of cloud computing solutions that are reliable, robust, extensible, and easy to migrate are also important.

 

Experience:

● Experience working on billing and budgets for a GCP project - MUST
● Experience working on optimizations on GCP based on vendor recommendations - NICE TO HAVE
● Experience in implementing the recommendations on GCP
● Architect certifications on GCP - MUST
● Excellent communication skills (both verbal & written) - MUST
● Excellent documentation skills on processes, steps, and instructions - MUST
● At least 2 years of experience on GCP

 

 

Basic Qualifications:

● Bachelor’s/Master’s Degree in Engineering OR equivalent.
● Extensive scripting or programming experience (Shell Script, Python).
● Extensive experience working with CI/CD (e.g. Jenkins).
● Extensive experience working with GCP, Azure, or Cloud Foundry.
● Experience working with databases (PostgreSQL, Elasticsearch).
● Must have a minimum of 2 years of experience with GCP certification.

 

 

Benefits:

● Competitive salary.
● Work from anywhere.
● Learning and gaining experience rapidly.
● Reimbursement for basic working set up at home.
● Insurance (including top-up insurance for COVID).

 

Location: Remote - work from anywhere.

LogiNext
Posted by Rakhi Daga
Mumbai
8 - 10 yrs
₹1L - ₹1L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

LogiNext is looking for a technically savvy and passionate Principal DevOps Engineer or Senior Database Administrator to cater to the development and operations efforts in the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.

You have hands-on experience in building secure, high-performing and scalable infrastructure, and experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging and production environments.


Responsibilities:


  • Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
  • Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
  • Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
  • Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO (a minimal backup-freshness sketch follows this list)
  • Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
  • Define and build processes to identify performance bottlenecks and scaling pitfalls
  • Manage robust monitoring and alerting infrastructure
  • Explore new tools to improve development operations and automate daily tasks
  • Ensure High Availability and Auto-failover with minimum or no manual interventions
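To illustrate the backup-policy responsibility above, here is a minimal, hypothetical Python sketch that uses boto3 to check how recently a backup landed in S3; the bucket, prefix, and 24-hour RPO threshold are assumptions rather than LogiNext specifics.

from datetime import datetime, timezone

import boto3

def latest_backup_age_hours(bucket: str, prefix: str) -> float:
    """Return the age in hours of the newest object under a backup prefix."""
    s3 = boto3.client("s3")
    newest = None
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if newest is None or obj["LastModified"] > newest:
                newest = obj["LastModified"]
    if newest is None:
        raise RuntimeError(f"no backups found under s3://{bucket}/{prefix}")
    return (datetime.now(timezone.utc) - newest).total_seconds() / 3600

if __name__ == "__main__":
    # Bucket, prefix and the 24h threshold are hypothetical RPO assumptions.
    age = latest_backup_age_hours("example-db-backups", "mongodb/daily/")
    print("RPO OK" if age <= 24 else f"RPO breached: last backup is {age:.1f}h old")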


Requirements:


  • Bachelor’s degree in Computer Science, Information Technology or a related field
  • 8 to 10 years of experience in designing and maintaining high-volume and scalable micro-services architecture on cloud infrastructure
  • Strong background in Linux/Unix administration and Python/Shell scripting
  • Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
  • Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
  • Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
  • Experience in query analysis, performance tuning and database redesign
  • Experience in enterprise application development, maintenance and operations
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Excellent written and oral communication skills, judgment and decision-making skills

LogiNext
Posted by Rakhi Daga
Mumbai
2 - 4 yrs
₹6L - ₹11L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more

LogiNext is looking for a technically savvy and passionate DevOps Engineer to cater to the development and operations efforts in the product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure.

You have hands-on experience in building secure, high-performing and scalable infrastructure, and experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in non-production and production environments.

Responsibilities:

  • Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
  • Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
  • Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
  • Support several Linux servers running our SaaS platform stack on AWS, Azure, GCP
  • Define and build processes to identify performance bottlenecks and scaling pitfalls
  • Manage robust monitoring and alerting infrastructure
  • Explore new tools to improve development operations


Requirements:

  • Bachelor’s degree in Computer Science, Information Technology or a related field
  • 2 to 4 years of experience in designing and maintaining high-volume and scalable micro-services architecture on cloud infrastructure
  • Strong background in Linux/Unix administration and Python/Shell scripting
  • Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
  • Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios (a minimal alerting sketch follows this list)
  • Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
  • Experience in enterprise application development, maintenance and operations
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Excellent written and oral communication skills, judgment and decision-making skills
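As a hedged illustration of the CloudWatch monitoring experience listed above, the following Python sketch uses boto3 to create a CPU-utilization alarm on an EC2 instance; the instance ID, SNS topic ARN, and thresholds are placeholders, not LogiNext's actual configuration.

import boto3

def create_cpu_alarm(instance_id: str, topic_arn: str) -> None:
    """Create a CloudWatch alarm that fires when EC2 CPU stays above 80%."""
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                 # 5-minute datapoints
        EvaluationPeriods=3,        # must breach for 15 minutes
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],   # notify an SNS topic
    )

if __name__ == "__main__":
    # The instance ID and SNS topic ARN are placeholders for illustration.
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:111122223333:ops-alerts")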

Bito Inc
Posted by Amrit Dash
Remote only
5 - 8 yrs
Best in industry
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Windows Azure
Ansible
Chef
+7 more

Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.

 

Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and is incredibly difficult!

 

We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.

 

We are hiring a DevOps Engineer to join our team.

 

Responsibilities:

  • Collaborate with the development team to design, develop, and implement Java-based applications
  • Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
  • Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc. (a minimal health-check sketch follows this list)
  • Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
  • Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
  • Evaluate and define/modify configuration management strategies and processes using Ansible
  • Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
  • Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
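To illustrate the cluster-maintenance responsibility above, here is a minimal Python sketch that queries Elasticsearch's _cluster/health REST endpoint with requests; the endpoint URL is a hypothetical internal address and authentication is omitted for brevity.

import requests

def elasticsearch_cluster_status(base_url: str) -> str:
    """Return the cluster status (green / yellow / red) from _cluster/health."""
    resp = requests.get(f"{base_url}/_cluster/health", timeout=10)
    resp.raise_for_status()
    return resp.json()["status"]

if __name__ == "__main__":
    # The URL is a hypothetical in-cluster endpoint; auth is omitted for brevity.
    status = elasticsearch_cluster_status("http://elasticsearch.internal:9200")
    print(f"Cluster status: {status}")
    if status == "red":
        raise SystemExit("Unassigned primary shards - investigate immediately")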

Requirements:

  • Minimum 4+ years of relevant work experience in a DevOps role
  • At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
  • Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
  • Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
  • Mastery in configuration automation tool sets such as Ansible, Chef, etc
  • Proficiency with Jira, Confluence, and Git toolset
  • Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
  • Proven ability to manage and prioritize multiple diverse projects simultaneously

What do we offer: 

At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology. 

  • Work from anywhere
  • Flexible work timings
  • Competitive compensation, including stock options
  • A chance to work in the exciting generative AI space
  • Quarterly team offsite events

HOP Financial Services
Posted by Shreya Dubey
Bengaluru (Bangalore)
3 - 4 yrs
₹8L - ₹11L / yr
Kubernetes
Docker
DevOps
Jenkins
Chef
+1 more

About Hop:


We are a London, UK based FinTech startup with a subsidiary in India. Hop is working towards building the next-generation digital banking platform for seamless and economical currency exchange, with technology at the crux of it. In a technology-driven era, many financial services platforms still lack a good customer experience and are cumbersome to use. Hop aims at building a ‘state of the art’, tech-centric, customer-focused solution.


moneyHOP is India’s first cross-border neo-bank providing millennials the ability to ‘Send’ & ‘Spend’ conveniently and economically across the globe using HOPRemit (An online remittance portal) and HOP app + Card (A multi-currency bank account).


This position is a crucially important position in the firm and the person hired will have the liberty to drive the product and provide direction in line with business needs.


Website: https://moneyhop.co/



About Individual

 

Looking for an enthusiastic individual who is passionate about technology and has worked with either a start-up or a blue-chip firm in the past.

 

The candidate needs to be a multi-tasker, highly self-motivated self-starter, with the ability to work in a high-stress environment. He/she should be tech-savvy and willing to embrace new technology comfortably.

 

Ideally, the candidate should have experience working with the technology stack in the scalable and high growth mobile application software.

 

General Skills

 

  • 3-4 years of experience in DevOps.
  • Bachelor's degree in Computer Science, Information Science, or equivalent practical experience.
  • Exposure to Behaviour Driven Development and experience in programming and testing.
  • Excellent verbal and written communication skills.
  • Good time management and organizational skills.
  • Dependability
  • Accountability and Ownership
  • Right attitude and growth mindset
  • Trustworthiness
  • Ability to embrace new technologies
  • Ability to get work done
  • Should have excellent analytical and troubleshooting skills.

Technical Skills

 

  • Work with developer teams with a focus on automating build and deployment using tools such as Jenkins (a minimal build-status sketch follows this list).
  • Implement CI/CD in projects (GitLab CI preferred).
  • Enable software build and deploy.
  • Provision both day-to-day operations and automation using tools, e.g. Ansible, Bash.
  • Write, plan, and create infrastructure as code using Terraform.
  • Monitoring and ITSM automation: incident creation from alerts using licensed and open-source tools.
  • Manage credentials for AWS cloud servers, GitHub repos, Atlassian Cloud services, Jenkins, OpenVPN, and the developers' environments.
  • Build environments for unit tests, integration tests, system tests, and acceptance tests using Jenkins.
  • Create and spin up resource instances.
  • Experience implementing CI/CD.
  • Experience with infrastructure automation solutions (Ansible, Chef, Puppet, etc.).
  • Experience with AWS.
  • Should have expert Linux and network administration skills to troubleshoot and trace symptoms back to the root cause.
  • Knowledge of application clustering / load balancing concepts and technologies.
  • Demonstrated ability to think strategically about developing solution strategies, and deliver results.
  • Good understanding of the design of cloud-native applications, and of cloud application design patterns and practices in AWS.
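As a hedged illustration of the Jenkins automation in the first item above, the following Python sketch queries the Jenkins JSON API for the result of a job's last completed build; the Jenkins URL, job name, and credentials are placeholders for illustration only.

import requests

def last_build_result(base_url: str, job: str, user: str, api_token: str) -> str:
    """Fetch the result (SUCCESS / FAILURE / ...) of a job's last completed build."""
    url = f"{base_url}/job/{job}/lastCompletedBuild/api/json"
    resp = requests.get(url, auth=(user, api_token), timeout=10)
    resp.raise_for_status()
    return resp.json().get("result") or "UNKNOWN"

if __name__ == "__main__":
    # URL, job name and credentials are placeholders for illustration.
    result = last_build_result(
        "https://jenkins.example.com", "hopremit-backend", "ci-bot", "API_TOKEN"
    )
    print(f"Last completed build: {result}")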

Day-to-Day requirements


  • Work with the developer team to enhance the existing CI/CD pipeline.
  • Adopt industry best practices to set up a UAT and prod environment for scalability.
  • Manage the AWS resources including IAM users, access control, billing etc.
  • Work with the test automation engineer to establish a CI/CD pipeline.
  • Work on making replication of environments easy to implement.
  • Enable efficient software deployment.
BlueCloud
Posted by Gunjan G
Pune
2 - 4 yrs
₹5L - ₹8L / yr
Kubernetes
DevOps
Docker
Amazon Web Services (AWS)
Jenkins
+5 more
Role and Responsibilities:
- Solve complex Cloud Infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly-available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD. Find bottlenecks in the software delivery pipeline. Fix bottlenecks with developers to help them deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility.
- Evaluate cutting edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks enabling them to move fast. (In simple words you'll be a bridge between tech, operations & product)

Skills required:

Must have:
- Deep understanding of open source DevOps tools.
- Scripting experience in one or more among Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, Cloud Formation, SQS, etc)
- Knowledge of distributed system deployment.
- Deployed and Orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Set up monitoring and alert systems for services using the ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with queue-based systems.
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies (a minimal sketch follows this list).
- Implemented central logging.
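To illustrate the caching item above, here is a minimal cache-aside sketch in Python using redis-py; the Redis host, the 5-minute TTL, and the database lookup are assumptions for illustration only.

import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)

def get_user_profile(user_id: int) -> dict:
    """Cache-aside lookup: serve from Redis, fall back to the source of truth."""
    key = f"user:{user_id}:profile"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = load_profile_from_database(user_id)        # slow path
    cache.setex(key, 300, json.dumps(profile))           # cache for 5 minutes
    return profile

def load_profile_from_database(user_id: int) -> dict:
    # Placeholder for a real database query.
    return {"id": user_id, "name": "example"}

if __name__ == "__main__":
    print(get_user_profile(42))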

Good to have:
- Experience dealing with PI information security.
- Experience conducting internal Audits and assisting External Audits.
- Experience implementing solutions on-premise.
- Experience with blockchain.
- Experience with Private Cloud setup.

Required Experience:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & Automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or similar version control systems.
- Deep understanding of at least one open-source distributed systems (Kafka, Redis, etc)
- Ownership attitude is a must.

What’s attractive about us?

We offer a suite of memberships and subscriptions to spice up your lifestyle. We believe in practicing an ultimate work-life balance and satisfaction. Working hard doesn’t mean clocking in extra hours; it means having a zeal to contribute the best of your talents. Our people culture helps us inculcate measures and benefits which help you feel confident and happy each and every day. Whether you’d like to skill up, go off the grid, attend your favourite events or be an epitome of fitness, we have you covered.
  • Health Memberships 
  • Sports Subscriptions 
  • Entertainment Subscriptions 
  • Key Conferences and Event Passes
  • Learning Stipend 
  • Team Lunches and Parties 
  • Travel Reimbursements 
  • ESOPs 

That’s what we think would brighten up your personal life, as a gesture of thanks for helping us with your talents.

Join us to be a part of our exciting journey to build one Digital Identity Platform!
Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
+7 more
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata

Pramata’s unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications via the Pramata cloud-based customer digitization platform. Pramata’s customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works

Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing and other systems, and delivers it in the context of a particular user’s role and responsibilities. This is done through Pramata’s unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth while ensuring that this data remains consistent, accessible and highly secure.

The opportunity - What you get to do

You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment & application management, and day-to-day support of development teams. You will manage the development of capabilities to achieve higher automation, quality and performance in automated build and deployment management, release management, on-demand environment configuration & automation, configuration and change management, and production environment support.

  • Application monitoring, performance management and production support of mission-critical applications, including application and system uptime and remote diagnostics.
  • Security - Ensure that the highly sensitive data from our customers is secure at all times.
  • Instrument applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues.
  • High availability and disaster recovery - Build and maintain systems that are designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place.
  • Automate provisioning and integration tasks as required to deploy new code.
  • Monitoring - Take proactive steps to monitor complex interdependent systems to ensure that issues are being identified and addressed in real time (see the sketch at the end of this listing).

Skills required:

  • Excellent communicator with great interpersonal skills, driving clarity about the intricate systems.
  • Hands-on experience in application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
  • Good understanding of software application builds, configuration management and deployments.
  • Strong scripting skills in Shell, Ruby, Python, Perl, etc.
  • Passion for automation.
  • Comfortable with collaboration, open communication and reaching across functional borders.
  • Advanced problem-solving and task break-down ability.

Additional Skills (Good to have but not mandatory):

  • In-depth understanding and experience working with any cloud platform (e.g. AWS, Azure, Google Cloud, etc.).
  • Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
  • Able to work under pressure and solve problems using an analytical approach; decisive, fast moving, and with a positive attitude.

Minimum Qualifications:

  • Bachelor’s Degree in Computer Science or a related field.
  • Background in technology operations for Linux-based applications with 2-4 years of experience in enterprise software.
  • Strong programming skills in Python, Shell or Java.
  • Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet.
  • Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS.
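To illustrate the monitoring responsibility above, here is a minimal, standard-library Python sketch that checks service ports and disk usage on a host; the hosts, ports, and 90% threshold are illustrative assumptions, not Pramata's actual monitoring setup.

import shutil
import socket

def disk_usage_percent(path: str = "/") -> float:
    """Percentage of disk space used on the given mount point."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hosts, ports and the 90% threshold are illustrative assumptions.
    checks = {"mysql": ("127.0.0.1", 3306), "nginx": ("127.0.0.1", 80), "redis": ("127.0.0.1", 6379)}
    for name, (host, port) in checks.items():
        print(f"{name}: {'up' if port_is_open(host, port) else 'DOWN'}")
    used = disk_usage_percent("/")
    print(f"root disk {used:.1f}% used" + (" - WARNING" if used > 90 else ""))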