TeamCity Jobs in Bangalore (Bengaluru)


Apply to 11+ TeamCity Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest TeamCity Job opportunities across top companies like Google, Amazon & Adobe.

Semperfi Solution

Posted by Ambika Jituri
Bengaluru (Bangalore)
8 - 9 yrs
₹20L - ₹25L / yr
DevOps
CI/CD
Docker
Jenkins
Terraform
+3 more
Google Cloud DevOps Engineer (Terraform / CI/CD Pipeline)
Experience: 8 to 10 years; notice period: 0 to 20 days

Job Description:

- Provision GCP resources based on the architecture design and features aligned with business objectives

- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization

- Assist IT/business users in resolving GCP service-related issues

- Provide guidelines for cluster automation and migration approaches and techniques, including ingesting, storing, processing, analysing and exploring/visualising data

- Provision GCP resources for data engineering and data science projects (a minimal provisioning sketch follows this list)

- Assistance with automated data ingestion, data migration and transformation (good to have)

- Assistance with deployment and troubleshooting of applications in Kubernetes

- Establish connections and credibility in how to address business needs via designing and operating cloud-based data solutions
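To make the provisioning bullet above concrete, here is a minimal, hypothetical sketch of provisioning a GCS bucket for a data engineering project with the google-cloud-storage Python client. The project ID, bucket name and region are illustrative placeholders, not details from this role.

```python
# Hypothetical sketch: provisioning a GCS bucket for a data engineering project.
# Assumes the google-cloud-storage client library and application-default credentials;
# the project ID, bucket name and location below are illustrative placeholders.
from google.cloud import storage


def provision_data_bucket(project_id: str, bucket_name: str, location: str = "asia-south1") -> storage.Bucket:
    """Create a regional bucket with uniform access, if it does not already exist."""
    client = storage.Client(project=project_id)
    bucket = client.bucket(bucket_name)
    if bucket.exists():
        return client.get_bucket(bucket_name)
    bucket.iam_configuration.uniform_bucket_level_access_enabled = True
    bucket.storage_class = "STANDARD"
    return client.create_bucket(bucket, location=location)


if __name__ == "__main__":
    b = provision_data_bucket("example-project", "example-ingest-bucket")
    print(f"Bucket ready: {b.name} in {b.location}")
```

In practice such provisioning would usually live in Terraform rather than a script; the snippet is only meant to show the shape of the task.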

Key Responsibilities / Tasks:

- Building complex CI/CD pipelines for cloud-native PaaS services such as databases, messaging, storage and compute in Google Cloud Platform

- Building deployment pipelines with GitHub CI (Actions)

- Writing Terraform code to deploy infrastructure as code (see the sketch after this list)

- Working with deployment and troubleshooting of Docker, GKE, OpenShift, and Cloud Run

- Working with Cloud Build, Cloud Composer, and Dataflow

- Configuring software to be monitored by AppDynamics

- Configuring Stackdriver logging and monitoring in GCP

- Working with Splunk, Kibana, Prometheus and Grafana to set up dashboards
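As a rough illustration of the Terraform/pipeline responsibilities above, the sketch below drives terraform init/plan/apply from a small Python wrapper of the kind a GitHub Actions job might call. The directory layout, flags and auto-apply policy are assumptions for illustration, not a prescription from this posting.

```python
# Hypothetical CI helper: runs "terraform init/plan/apply" for one environment directory.
# The environments/ layout and the saved-plan workflow are assumptions for illustration.
import subprocess
import sys


def run(cmd: list[str], cwd: str) -> None:
    """Run a command, streaming output, and fail the pipeline on a non-zero exit."""
    print(f"$ {' '.join(cmd)}")
    result = subprocess.run(cmd, cwd=cwd)
    if result.returncode != 0:
        sys.exit(result.returncode)


def deploy(env_dir: str, apply: bool) -> None:
    run(["terraform", "init", "-input=false"], cwd=env_dir)
    run(["terraform", "plan", "-input=false", "-out=tfplan"], cwd=env_dir)
    if apply:
        # Applying the saved plan keeps what was reviewed and what is deployed identical.
        run(["terraform", "apply", "-input=false", "tfplan"], cwd=env_dir)


if __name__ == "__main__":
    # e.g. invoked from a GitHub Actions step: python deploy.py environments/dev --apply
    target = sys.argv[1] if len(sys.argv) > 1 else "environments/dev"
    deploy(target, apply="--apply" in sys.argv)
```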

Your skills, experience, and qualifications:

- Total experience of 5+ years in DevOps, with at least 4 years of experience in Google Cloud and GitHub CI.

- Strong experience with microservices/APIs.

- Strong experience with DevOps tools like GitHub CI, TeamCity, Jenkins and Helm.

- Should know application deployment and testing strategies on Google Cloud Platform.

- Defining and setting development, test, release, update, and support processes for DevOps operations.

- Strive for continuous improvement and build continuous integration, continuous delivery and continuous deployment pipelines (CI/CD).

- Excellent understanding of Java.

- Knowledge of Kafka, ZooKeeper, Hazelcast and Pub/Sub is nice to have.

- Understanding of cloud networking and security, such as software-defined networking/firewalls, virtual networks and load balancers.

- Understanding of cloud identity and access management.

- Understanding of the compute runtime and the differences between native compute, virtual machines and containers.

- Configuring and managing databases such as Oracle, Cloud SQL, and Cloud Spanner.

- Excellent troubleshooting skills.

- Working knowledge of various tools and open-source technologies.

- Awareness of critical concepts in Agile principles.

- Google Professional Cloud DevOps Engineer certification is desirable.

- Experience with an Agile/Scrum environment.

- Familiarity with Agile team management tools (JIRA, Confluence).

- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage).

- Good communication skills.

- Proactive team player.

- Comfortable working in multi-disciplinary, self-organized teams.

- Professional knowledge of English.

- Differentiators: knowledge/experience about
Biofourmis

Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana and Loki (a small exporter sketch follows this list)
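As a hedged illustration of the observability point above, this sketch uses the prometheus_client library to expose a couple of custom health-check metrics that Prometheus could scrape and Grafana could chart. The metric names, port and health endpoint are made-up placeholders.

```python
# Hypothetical custom exporter: publishes the result and latency of an application
# health check as Prometheus gauges. URL, port and metric names are assumptions.
import time
import urllib.request

from prometheus_client import Gauge, start_http_server

APP_UP = Gauge("app_health_check_up", "1 if the application health endpoint responds with 200, else 0")
APP_LATENCY = Gauge("app_health_check_seconds", "Latency of the last health check in seconds")


def probe(url: str = "http://localhost:8080/healthz") -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            APP_UP.set(1 if resp.status == 200 else 0)
    except Exception:
        APP_UP.set(0)
    APP_LATENCY.set(time.monotonic() - start)


if __name__ == "__main__":
    start_http_server(9100)  # metrics exposed at :9100/metrics for Prometheus to scrape
    while True:
        probe()
        time.sleep(15)
```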


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

Full Service Engineering and R&D based MNC

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore), Hyderabad
5 - 8 yrs
₹10L - ₹17L / yr
Python
Bash
PowerShell
Docker
Kubernetes
+6 more
Candidates MUST HAVE
  • Experience with Infrastructure-as-Code (IaC) tools like Terraform and CloudFormation.
  • Proficiency in cloud-native technologies and architectures (Docker/Kubernetes) and CI/CD pipelines.
  • Good experience in JavaScript.
  • Expertise in Linux/Windows environments.
  • Good experience in scripting languages like PowerShell, Bash and Python.
  • Proficiency in revision control (Git) and DevOps best practices.
Careator Technologies Pvt Ltd
Posted by Bharani Sharma
Bengaluru (Bangalore)
7 - 11 yrs
₹1L - ₹20L / yr
DevOps
Kubernetes
Docker
Jenkins
Amazon Web Services (AWS)

 

Who you are

 

The possibility of having massive societal impact. Our software touches the lives of hundreds of millions of people.

Solving hard governance and societal challenges

Work directly with central and state government leaders and other dignitaries

Mentorship from world-class people and rich ecosystems


Position: Architect or Technical Lead - DevOps

Location: Bangalore

Role:

Strong architecture and system design skills for multi-tenant, multi-region, redundant and highly available mission-critical systems.

Clear understanding of core cloud platform technologies across public and private clouds.

Lead DevOps practices for Continuous Integration and Continuous Deployment pipeline and IT operations practices, scaling, metrics, as well as running day-to-day operations from development to production for the platform.

Implement non-functional requirements needed for world-class reliability, scale, security and cost-efficiency throughout the product development lifecycle.

Drive a culture of automation, and self-service enablement for developers.

Work with information security leadership and technical team to automate security into the platform and services.

Meet infrastructure SLAs and compliance control points of cloud platforms.

Define and contribute to initiatives to continually improve Solution Delivery processes.

Improve organisation’s capability to build, deliver and scale software as a service on the cloud

Interface with Engineering, Product and Technical Architecture group to meet joint objectives.

You are deeply motivated and have experience solving hard-to-crack societal challenges, with the stamina to see it all the way through.

You must have 7+ years of hands-on experience with DevOps, microservices architecture (MSA) and the Kubernetes platform.

You must have 2+ years of strong Kubernetes experience and must hold the CKA (Certified Kubernetes Administrator) certification.

You should be proficient in the tools/technologies involved in a microservices architecture deployed in a multi-cloud Kubernetes environment.

You should have hands-on experience in architecting and building CI/CD pipelines from code check-in until production.

You should have strong knowledge of Docker, Jenkins pipeline scripting, Helm charts, Kubernetes objects and manifests, and Prometheus, and demonstrate hands-on knowledge of multi-cloud compute, storage and networking.

You should have experience in designing, provisioning and administering environments using best practices across multiple public cloud offerings (Azure, AWS, GCP) and/or private cloud offerings (OpenStack, VMware, Nutanix, NIC, bare-metal infra).

You should have experience in setting up logging, monitoring cloud application performance, error rates and error budgets while tracking adherence to SLOs and SLAs (a short error-budget illustration follows).
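As a quick aside on error budgets, added here for illustration and not part of the original posting: an availability SLO translates directly into a budget of allowed downtime, as the small sketch below shows with example numbers.

```python
# Illustrative only: how an availability SLO translates into an error budget.
# The SLO targets and 30-day window are example numbers, not figures from this role.
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes


if __name__ == "__main__":
    for slo in (0.999, 0.9995, 0.9999):
        print(f"{slo:.2%} over 30 days -> {error_budget_minutes(slo):.1f} minutes of error budget")
```

For example, a 99.9% SLO over 30 days leaves roughly 43.2 minutes of error budget; burning through it faster than planned is the usual trigger for slowing releases.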

Hybrid Cloud Environments

Agency job
via The Hub by Sridevi Viswanathan
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹12L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
CloudFront
Installation
What we do? 

We are a boutique IT services & solutions firm headquartered in the Bay Area with offices in India. Our offering includes custom-configured hybrid cloud solutions backed by our managed services. We combine best-in-class DevOps and IT infrastructure management practices to manage our clients' hybrid cloud environments.
In addition, we build and deploy our private cloud solutions using OpenStack to provide our clients with a secure, cost-effective and scalable hybrid cloud solution. We work with start-ups as well as enterprise clients.
This is an exciting opportunity for an experienced Cloud Engineer to work on exciting projects and have an opportunity to expand their knowledge working on adjacent technologies as well.

Must have skills

• Provisioning skills on IaaS cloud platforms such as AWS, Azure and GCP.

• Strong working experience in the AWS space with various AWS services and implementations (e.g. VPCs, SES, EC2, S3, Route 53, CloudFront, etc.)

• Ability to design solutions based on client requirements.

• Some experience with various network LAN/WAN appliances (Cisco routers and ASA systems, Barracuda, Meraki, SilverPeak, Palo Alto, Fortinet, etc.)

• Understanding of networked storage like (NFS / SMB / iSCSI / Storage GW / Windows Offline)

• Linux / Windows server installation, maintenance, monitoring, data backup and recovery, security, and administration.

• Good knowledge of TCP/IP protocol & internet technologies.

• Passion for innovation and problem solving, in a start-up environment.

• Good communication skills.

Good to have

• Remote Monitoring & Management.

• Familiarity with Kubernetes and Containers.

• Exposure to DevOps automation scripts & experience with tools like Git, Bash scripting, PowerShell, AWS CloudFormation, Ansible, Chef or Puppet will be a plus.

• Architect / Practitioner certification from OEM with hands-on capabilities.

What you will be working on

• Troubleshoot and handle L2/L3 tickets.

• Design and architect Enterprise Cloud systems and services.

• Design, build and maintain environments primarily in AWS using EC2, S3/storage, CloudFront, VPC, ELB, Auto Scaling, Direct Connect, Route 53, firewalls, etc. (a small housekeeping sketch follows this list)

• Build and deploy in GCP/Azure as needed.

• Architect cloud solutions keeping performance, cost and BCP considerations in mind.

• Plan cloud migration projects as needed.

• Collaborate & work as part of a cohesive team.

• Help build our private cloud offering on OpenStack.
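To give one concrete flavour of routine AWS housekeeping in such an environment, here is a hypothetical boto3 sketch that flags EC2 instances missing commonly required tags. The tag keys and region are assumptions, not requirements from this posting.

```python
# Hypothetical housekeeping sketch for an AWS environment: flag EC2 instances that are
# missing the tags an ops team typically relies on. Tag keys and region are assumptions;
# requires boto3 and valid AWS credentials.
import boto3

REQUIRED_TAGS = {"Environment", "Owner"}


def untagged_instances(region: str = "ap-south-1") -> list[str]:
    """Return instance IDs that are missing any of the required tags."""
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {tag["Key"] for tag in instance.get("Tags", [])}
                if not REQUIRED_TAGS.issubset(tag_keys):
                    flagged.append(instance["InstanceId"])
    return flagged


if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing required tags: {instance_id}")
```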
Edtech product company

Agency job
via The Hub by Sridevi Viswanathan
Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹12L / yr
DevOps
Docker
Kubernetes
Jenkins
Linux/Unix
+5 more

Job Description

We are looking for an experienced software engineer with a strong background in DevOps and handling traffic & infrastructure at scale.

Responsibilities :

Work closely with product engineers to implement scalable and highly reliable systems.

Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.

Collaborate with other developers to understand & set up the tooling needed for Continuous Integration/Delivery/Deployment practices.

Build & operate infrastructure to support the website, backend cluster and ML projects in the organization.

Monitor and track the performance and reliability of our services and software to meet the promised SLA.

You are the right fit if you have:

1+ years of experience working on distributed systems and shipping high-quality product features on schedule

Experience with Python, including object-oriented programming

Container administration and development utilizing Kubernetes, Docker, Mesos, or similar

Infrastructure automation through Terraform, Chef, Ansible, Puppet, Packer or similar

Knowledge of cloud compute technologies and network monitoring

Experience with cloud orchestration frameworks, and development and SRE support of these systems

Experience with CI/CD pipelines including VCS (Git, SVN, etc.), GitLab Runners, Jenkins

Working with or supporting production, test, and development environments for medium to large user bases

Installing and configuring application servers and database servers

Experience in developing scripts to automate software deployments and installations

Experience in a 24x7 high-availability production environment

Ability to come up with the best solution by capturing the big picture instead of focusing on minor details; root cause analysis

Mandatory skills: Shell/Bash scripting, Unix, Linux, Docker, Kubernetes, AWS, Jenkins, Git

Koo App

Posted by Neha Gandhi
Bengaluru (Bangalore)
3 - 7 yrs
₹15L - ₹30L / yr
DevOps
CI/CD
Kubernetes
Amazon Web Services (AWS)

Job description

 

Problem Statement-Solution


Only 10% of India speaks English and 90% speak over 25 languages and 1000s of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. The non-English speaking internet users will balloon to about 600 million users out of the total 750 million internet users in India by 2020. This will make the vernacular segment one of the largest segments in the world - almost 2x the size of the US population. The vernacular segment has very few products that they can use on the internet.

One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally but the experience is mostly in English. There’s a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.


About Koo

Koo was founded in March 2020, as a micro-blogging platform in both Indian languages and English, which gives a voice to the millions of Indians who communicate in Indian languages.

Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that can deliver an immersive language experience to an Indian user, thereby enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ enables users to leverage the voice assistant to share their thoughts without having to type. In August 2021, Koo crossed 10 million downloads, in just 16 months of launch.

Since June 2021, Koo is available in Nigeria.


Founding Team

Koo is founded by veteran internet entrepreneurs - Aprameya Radhakrishna (CEO, Taxiforsure) and Mayank Bidawatka (Co-founder, Goodbox & Coreteam, redBus).

 

Technology Team & Culture

The technology team comprises sharp coders, technology geeks and people who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent comes from the likes of Google, Walmart, Redbus, Dailyhunt. Anyone who is part of the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we've built. The technology stack comprises a wide variety of cutting-edge technologies like Kotlin, Java 15, Reactive Programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, deep learning etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.

 

Technology skill sets required for a matching profile 

 

  • Experience of 3 to 7 years in a DevOps role, with one or more stints at a fast-paced startup (mandatory).
  • Mandatory experience with containers, Kubernetes (EKS, set up from scratch), Istio and microservices.
  • Sound knowledge of technologies like Terraform, automation scripts, cron jobs etc. Must have worked towards infrastructure as code, e.g. setting up new environments entirely with code.
  • Knowledge of industry standards around monitoring, alerting, self-healing, high availability, auto-scaling etc.
  • Exhaustive experience with various cloud technologies (especially on AWS) like SQS, SNS, Elasticsearch, ElastiCache, Elastic Transcoder, VPC, subnets, security groups etc.
  • Must have set up stable CI/CD pipelines capable of zero-downtime deployments using rolling updates, blue-green or canary modes (a rollout-check sketch follows this list).
  • Experience with VPN and LDAP solutions for securely logging in to infrastructure and providing SSO.
  • Mastery of deploying and troubleshooting all layers of an application, from network, frontend and backend to databases (MongoDB, Redis, Postgres, Cassandra, Elasticsearch, Aerospike etc.).
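As a small, hypothetical companion to the zero-downtime deployment point above, this sketch uses the official Kubernetes Python client to wait for a Deployment's rolling update to converge before a pipeline proceeds. The deployment name, namespace and timeout are placeholders.

```python
# Hypothetical post-deploy gate for a rolling update: wait until the Deployment reports
# as many updated, available replicas as desired. Names and namespace are placeholders;
# requires the "kubernetes" Python client and a working kubeconfig.
import time

from kubernetes import client, config


def wait_for_rollout(name: str, namespace: str = "default", timeout_s: int = 300) -> bool:
    config.load_kube_config()  # inside a cluster, config.load_incluster_config() instead
    apps = client.AppsV1Api()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
        desired = dep.spec.replicas or 0
        status = dep.status
        if (
            (status.updated_replicas or 0) >= desired
            and (status.available_replicas or 0) >= desired
            and (status.unavailable_replicas or 0) == 0
        ):
            return True
        time.sleep(5)
    return False


if __name__ == "__main__":
    ok = wait_for_rollout("web-api")
    print("rollout complete" if ok else "rollout did not converge in time")
```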
Basik Marketing PVT LTD

Posted by Naveen G
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹22L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Automation
+4 more

As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable. 

Specific responsibilities will include: 

  • Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible. 
  • Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications. 
  • Ensuring that the organization's systems are secure and compliant with industry standards. 
  • Collaborating with software developers to design and implement infrastructure as code. 
  • Providing mentorship and technical guidance to team members. 
  • Troubleshooting and resolving technical issues in collaboration with other IT professionals. 
  • Participating in the development and maintenance of the organization's disaster recovery and incident response plans. 

To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment. 

APT Portfolio

Posted by Ankita Pachauri
Delhi, Gurugram, Bengaluru (Bangalore)
10 - 15 yrs
₹50L - ₹70L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+13 more

A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading & investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.


As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:

  • Private Cloud - Design & maintain a high-performance and reliable network architecture to support HPC applications.
  • Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
  • Security - Implement best security practices and a data isolation policy between different divisions internally.
  • Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
  • Storage Solution - Optimize storage solutions like NetApp, EMC and Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
  • NFS - Implement and optimize the latest version of NFS for our use case.
  • Public Cloud - Drive AWS/Google Cloud utilization in the firm for increasing efficiency, improving collaboration and reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
  • Backups - Identify and automate backups of all crucial data/binaries/code etc. in a secured manner, at a frequency warranted by the use case. Ensure that recovery from backup is tested and seamless.
  • Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
  • Operating System - Plan, test and roll out new operating systems for all production, simulation and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
  • Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef etc. for the same.
  • Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
  • Audit access logs on devices. Use third-party tools to put in place a monitoring mechanism for early detection of any suspicious activity.
  • Maintaining all third-party tools used for development and collaboration - this shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo etc.


Qualifications 

  • Bachelor's or Master's degree, preferably in CSE/IT
  • 10+ years of relevant experience in a sysadmin function
  • Must have strong knowledge of IT infrastructure, Linux, networking and grid computing.
  • Must have a strong grasp of automation & data management tools.
  • Proficient in scripting languages, including Python


Desirables

  • Professional attitude; a co-operative and mature approach to work; must be focused, structured and well considered; strong troubleshooting skills.
  • Exhibit a high level of individual initiative and ownership, and effectively collaborate with other team members.

 

APT Portfolio is an equal opportunity employer

Consulting and Product Engineering Company

Agency job
via Exploro Solutions by Sapna Prabhudesai
Hyderabad, Bengaluru (Bangalore), Pune, Chennai
8 - 12 yrs
₹7L - ₹30L / yr
DevOps
Terraform
Docker
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

Job Description:

 

○ Develop best practices for the team and take responsibility for the architecture, solutions and documentation operations in order to meet the engineering department's quality and standards

○ Participate in production outages, handle complex issues and work towards resolution

○ Develop custom tools and integrations with existing tools to increase engineering productivity

 

 

Required Experience and Expertise

 

○ Good knowledge of Terraform; someone who has worked on large TF code bases.

○ Deep understanding of Terraform with best practices & writing TF modules.

○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3 and IAM. A cost-aware mindset towards cloud services.

○ Deep understanding of kernel, networking and OS fundamentals

NOTICE PERIOD - Max 30 days

Healthifyme

Posted by Sri Harsha Gadde
Bengaluru (Bangalore)
3 - 7 yrs
₹20L - ₹25L / yr
DevOps
Chef
Puppet
About HealthifyMe: We were founded in 2012 by Tushar Vashisht and Sachin Shenoy, and incubated by Microsoft Accelerator. Today, we happen to be India's largest and most loved health & fitness app with over 4 million users from 220+ cities in India. What makes us unique is our ability to bring together the power of artificial intelligence powered technology and human empathy to deliver measurable impact in our customers' lives. We do this through our team of elite nutritionists & trainers working together with the world's first AI-powered virtual nutritionist - "Ria", our proudest creation till date. Ria references data from over 200 million food & workout logs and 14 million conversations to deliver intelligent health & fitness suggestions to our customers. Ria also happens to be multi-lingual; "she" understands English, French, German, Italian & Hindi. Recently Russia's Sistema and Samsung's AI-focussed fund, NEXT, led a USD 12 million Series B funding into our business. We are the most liked app in India across categories, we've been consistently rated as the no. 1 health & fitness app on the Play Store for 3 years running, and we received Google's "Editor's Choice" award in 2017. Some of the marquee corporates in the country, such as Cognizant, Accenture, Deloitte and MetLife, amongst others, have also benefited from our employee engagement and wellness programs. Our global aspirations have taken us to the MENA, SEA and LATAM regions, with more markets to follow.

Desired Skills & Experience:

Requirements:

- Background in Linux/Unix administration
- Experience with automation/configuration management using Puppet, Chef or an equivalent
- Ability to use a wide variety of open-source tools
- Experience with AWS is a must
- Hands-on experience with at least 3 of RDS, EC2, ELB, EBS, S3, SQS, CodeDeploy, CloudWatch
- Strong experience in managing SQL and MySQL databases
- Hands-on experience managing web servers: Apache, Nginx, Lighttpd, Tomcat (Apache experience is valued)
- A working understanding of code and scripting (PHP, Python, Perl and/or Ruby)
- Knowledge of best practices and IT operations in an always-up, always-available service
- Good to have: experience in Python and NodeJS

Stack:

- Python/Django, MySQL, NodeJS + MongoDB
- ElastiCache, DynamoDB, SQS, S3
- Deployed in AWS

We'd love to see:

- Experience in Python and data modeling
- Git and distributed revision control experience
- High-profile work on commercial or open-source projects
- Ownership of your work, and an understanding of the need for code quality, elegance, and robust infrastructure

Look forward to:

- Working with a world-class team
- Fun & work at the same place with an amazing work culture and flexible timings
- Getting ready to transform yourself into a health junkie

Join HealthifyMe and make history!