Striving for excellence is in our DNA.
We are more than just specialists; we are experts in agile software development with a keen focus on Cloud Native D3 (Digital, Data, DevSecOps). We help leading global businesses imagine, design, engineer, and deliver software and digital experiences that change the world.
Description
Headquartered in Princeton, NJ (United States), we are a fast-growing multinational company. This role is based out of our India setup.
We believe that we are only as good as the quality of our people. Our offices are digital pods. Our clients are Fortune brands. We’re always looking for the most talented and skilled teammates. Do you have it in you?
About The Role
As a DevOps Expert, you are responsible for building and maintaining a frictionless path to production for web applications in order to support continuous delivery and a productive team. You will be working in close collaboration with the business, as well as other teams across StatusNeo.
We offer you a great opportunity to work on cutting edge projects and enhance your knowledge base. You level up your technical skills while performing lots of challenging and interesting tasks.
Responsibilities
- You will promote best practices across the team, and mentor and coach other team members.
- You are responsible for building and maintaining a frictionless path to production for web applications in order to support continuous delivery and a productive team.
- You will also develop and maintain web applications.
Requirements
- Minimum 3 years of experience in software development
- Minimum 2 years of experience in system administration and operations
- Expertise in GCP, AWS, Docker, Kubernetes, Terraform, GitHub
- Programming/application development - Java, Node.js
- Scripting - Bash, Ruby, Python, etc.
- Virtualization and cloud - VMware, OpenShift, Docker, Kubernetes, etc.
- CI/CD - Jenkins, GitLab CI, etc.
- Configuration management - Chef, Puppet, Ansible, etc.
- Database - MongoDB, MySQL, Oracle, SQL, etc.
- Monitoring - infra and application monitoring, logging, alerting
- Testing - infra and application testing
Competencies
- Scripting and Automation
- Programming/Application Development
- Virtualization and Cloud
- CI/CD
- Configuration Management
- Database and Storage Management
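The scripting and automation competency above usually comes down to small glue tasks. As a hedged illustration only (the file paths and log format are assumptions, not from this posting), a minimal Bash sketch that scans a log file for errors and signals an alert:

```shell
# Minimal sketch: scan a log file for ERROR lines and report.
# scan_logs FILE -> prints a summary; returns 1 if errors were found.
scan_logs() {
  local file="$1"
  local errors
  # grep -c prints 0 (and exits non-zero) when nothing matches, hence || true.
  errors=$(grep -c 'ERROR' "$file" || true)
  if [ "$errors" -gt 0 ]; then
    echo "ALERT: $errors error line(s) in $file"
    return 1
  fi
  echo "OK: no errors in $file"
}

# Demo against a throwaway log file.
printf 'INFO started\nERROR disk full\nINFO done\n' > /tmp/demo.log
scan_logs /tmp/demo.log || true   # prints: ALERT: 1 error line(s) in /tmp/demo.log
```

In practice a check like this would run from cron or a monitoring agent and page someone rather than just echo.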
Good To Have
- Good command of the English language for written and verbal communication
- Experience working in Agile teams
- Ability to work within a team of developers
What We Offer
- National and international business trips (as opportunities arise)
- Culture of knowledge sharing and training
- Modern & lively working environment
- Opportunities to write books and participate in conferences
- International assignments
- Relocation opportunities
Similar jobs
Role - Sr. DevOps Engineer
Location - Bangalore
Experience - 5+ years
Responsibilities
- Implementing various development, testing, and automation tools, and IT infrastructure
- Planning the team structure and activities, and involvement in project management activities
- Defining and setting development, test, release, update, and support processes for the DevOps operation
- Troubleshooting issues and fixing code bugs
- Monitoring processes across the entire lifecycle for adherence, and updating or creating new processes for improvement and to minimize waste
- Encouraging and building automated processes wherever possible
- Incident management and root cause analysis
- Selecting and deploying appropriate CI/CD tools
- Striving for continuous improvement, and building a continuous integration, continuous delivery, and continuous deployment (CI/CD) pipeline
- Mentoring and guiding team members
- Monitoring and measuring customer experience and KPIs
Requirements
- 5-6 years of relevant experience in a DevOps role.
- Good knowledge of cloud technologies such as AWS/Google Cloud.
- Familiarity with container orchestration services, especially Kubernetes (mandatory), and good knowledge of Docker.
- Experience administering and deploying development CI/CD tools such as Git, Jira, GitLab, or Jenkins.
- Good knowledge of complex debugging mechanisms (especially the JVM) and Java programming experience (mandatory).
- Significant experience with Windows and Linux operating system environments.
- A team player with excellent communication skills.
Shiprocket is a logistics platform which connects Indian eCommerce SMBs with logistics players to enable end-to-end solutions.
Our innovative data-backed platform drives logistics efficiency, helps reduce costs, increases sales throughput by reducing RTO, and improves post-order customer engagement and experience.
Our vision is to power all logistics for the direct commerce market in India, including first mile, linehaul, last mile, warehousing, cross-border, and O2O.
We are seeking an experienced DevOps Engineer across product lines.
Key Responsibilities
- Deploy, automate, maintain, and manage AWS cloud-based production systems. Ensure the availability, performance, scalability, and security of production systems.
- Build, release, and configuration management of production systems.
- System troubleshooting and problem solving across platform and application domains.
- Suggesting architecture improvements and recommending process improvements.
- Ensuring critical system security through the use of best-in-class cloud security solutions.
Skill set
- DevOps: Solid experience as a DevOps Engineer in a 24x7 uptime Amazon AWS environment, including automation experience with configuration management tools.
- Scripting skills: Strong scripting (e.g. Python, shell scripting) and automation skills.
- Monitoring tools: Experience with system monitoring tools (e.g. Nagios).
- Problem solving: Ability to analyze and resolve complex infrastructure resource and application deployment issues.
- DB skills: Basic DB administration experience (RDS, MongoDB); experience in setting up and managing AWS Aurora databases.
- ELK: Proficient in ELK stack setup.
- GitHub: Experienced in maintaining and administering GitHub.
- Accountable for proper backup and disaster recovery procedures.
- Experience with Puppet, Chef, Ansible, or Salt.
- Professional commitment to high quality, and a passion for learning new skills.
- Detail-oriented individual with the ability to rapidly learn new concepts and technologies.
- Strong problem-solving skills, including providing simple solutions to complex situations.
- Must be a strong team player with the ability to communicate and collaborate effectively in a geographically dispersed working environment.
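One bullet above calls out accountability for backup and disaster recovery procedures. A minimal, hedged sketch of what an automated backup with rotation can look like (the paths, naming scheme, and retention count are illustrative, not Shiprocket's actual setup):

```shell
# Archive SRC into DEST as a timestamped tarball and keep only the newest
# KEEP copies. All names here are placeholders.
backup_dir() {
  local src="$1" dest="$2" keep="${3:-7}"
  mkdir -p "$dest"
  local stamp archive
  stamp=$(date +%Y%m%d%H%M%S)
  archive="$dest/backup-$stamp.tar.gz"
  # -C keeps archive paths relative to the parent of SRC.
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
  # Rotation: list archives newest-first, drop everything past the first $keep.
  ls -1t "$dest"/backup-*.tar.gz | tail -n +"$((keep + 1))" | xargs -r rm -f
  echo "$archive"
}
```

In production a job like this would typically run from cron and ship the archive off-host (e.g. to S3) before rotating, and restores would be rehearsed as part of the DR procedure.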
● Manage AWS services and day-to-day cloud operations.
● Work closely with the development and QA teams to make the deployment process smooth, and devise new tools and technologies in order to automate most of the components.
● Strengthen the infrastructure in terms of reliability (configuring HA, etc.), security (cloud network management, VPC, etc.), and scalability (configuring clusters, load balancers, etc.).
● Expert-level understanding of DB replication, sharding (MySQL DB systems), HA clusters, failovers, and recovery mechanisms.
● Build and maintain CI/CD (continuous integration/deployment) workflows.
● Expert knowledge of AWS EC2, S3, RDS, CloudFront, and other AWS services and products.
● Installation and management of software systems to support the development team, e.g. DB installation and administration, web servers, caching, and other such systems.
Requirements:
● B.Tech or Bachelor's in a related field.
● 2-5 years of hands-on experience with AWS cloud services such as EC2, ECS, CloudWatch, SQS, S3, CloudFront, Route 53.
● Experience with setting up CI/CD pipelines and successfully running large-scale systems.
● Experience with source control systems (SVN, Git, etc.) and deployment and build automation tools like Jenkins, Bamboo, Ansible, etc.
● Good experience and understanding of Linux/Unix-based systems, and hands-on experience working with them with respect to networking, security, and administration.
● At least 1-2 years of experience with shell/Python/Perl scripting; experience with Bash scripting is an added advantage.
● Experience with automation tasks like automated backups, configuring failovers, and automating deployment-related processes is a must-have.
● Good to have: knowledge of setting up the ELK stack; infrastructure-as-code services like Terraform; working with and automating processes via AWS SDK/CLI tools and scripts.
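The must-have bullet on automating deployment-related processes is often met with a release-directory-plus-symlink pattern. A hedged sketch of that pattern (the directory layout and names are assumptions, not from this posting):

```shell
# Each build is unpacked into $app_root/releases/<id>; switching traffic is
# just repointing the "current" symlink, which also makes rollback a one-liner.
activate_release() {
  local app_root="$1" release_id="$2"
  local target="$app_root/releases/$release_id"
  [ -d "$target" ] || { echo "no such release: $release_id" >&2; return 1; }
  # -n treats an existing symlink as a file, so it is replaced, not descended into.
  ln -sfn "$target" "$app_root/current"
  echo "current -> $release_id"
}
```

A web server configured to serve from `current/` picks up the swap immediately, and rolling back is just calling the function again with the previous release id.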
DevOps Engineer
at Karkinos Healthcare Pvt Ltd
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos healthcare is democratization of cancer care in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments and have India become a leader in oncology research. Karkinos will be with the patient every step of the way, to advise them, connect them to the best specialists, and to coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- Critical role that involves setting up and owning the dev, staging, and production infrastructure for the platform, which uses microservices, data warehouses, and a data lake.
- Demonstrate technical leadership in incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing, and process reengineering.
- Build automated deployments for consistent software releases with zero downtime.
- Deploy new modules, upgrades, and fixes to the production environment.
- Participate in the development of contingency plans, including reliable backup and restore procedures.
- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high-quality and predictable delivery.
- Work on implementing DevSecOps and GitOps practices.
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions.
- Build platform tools that the rest of the engineering teams can use.
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development and operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience with public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry) - preferably GCP.
- Experience managing infra for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment and service orchestration using Kubernetes.
- Experience with, and a good understanding of, Kubernetes, service meshes (Istio preferred), API gateways, network proxies, etc.
- Experience setting up infra for central monitoring of infrastructure, with the ability to debug and trace.
- Experience with, and a deep understanding of, cloud networking and security.
- Experience in continuous integration and delivery (Jenkins/Maven, GitHub/GitLab).
- Strong scripting language knowledge, such as Python or Shell.
- Experience with Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
Lead DevOps Engineer
Responsibilities
- Automate and streamline deployment activities
- Monitor all production and development servers and ensure 24/7 availability
- Implement best practices to ensure security and availability
- Work closely with the development team to understand the changes in each release, and keep all tools up to date to ensure automated deployments
- Own all infrastructure-related troubleshooting during unplanned outages
- Escalate and communicate issues
- Work closely with the development team to ensure platforms are designed for scale, availability, and performance
- Help developers with debugging issues.
Qualifications
- Proficient with Linux administration (backups, maintenance, installation/upgrades)
- Experience with IaC tools such as Terraform, AWS CloudFormation, etc.
- Experience with CI tools - Jenkins or similar
- Experience with AWS and/or Microsoft Azure services and Docker
- Exposure to monitoring tools - Nagios, Grafana, Prometheus
- Experience with and understanding of RDBMS and NoSQL data stores; exposure to Neo4j or any other graph database is good to have
- Working understanding of application code written in Node.js or any other programming language
- Self-starter and self-learner
- Great communication skills
- Experience with the following on AWS is needed:
EC2, ECS, ECR, ALB, CloudWatch, S3, Lambda, Serverless, RDS, Kinesis, CloudFormation
Experience setting up and testing microservices at scale, and coming up with relevant CloudWatch alarms, metrics, and dashboards.
Networking: VPC, subnets, NAT Gateway, Certificate Manager, Route 53, route tables, security groups
CI/CD: Automation with Jenkins.
Setting up infrastructure for dev and production: Node.js, Neo4j, Nginx/Apache, MongoDB/Atlas, Aurora MySQL, RDS, Redis Cluster.
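The monitoring exposure asked for above (Nagios/Grafana/Prometheus, CloudWatch alarms) ultimately boils down to threshold checks. A toy, hedged version of a disk-usage check of the kind such alarms encode (the mount point and threshold defaults are illustrative):

```shell
# Report CRITICAL (exit 2, Nagios-style) when usage of a mount point is at or
# above a percentage threshold; otherwise report OK.
check_disk() {
  local mount="${1:-/}" threshold="${2:-80}"
  local used
  # df -P gives a stable, portable layout; field 5 is "Use%" on data line 2.
  used=$(df -P "$mount" | awk 'NR==2 { sub("%", "", $5); print $5 }')
  if [ "$used" -ge "$threshold" ]; then
    echo "CRITICAL: $mount at ${used}% (threshold ${threshold}%)"
    return 2
  fi
  echo "OK: $mount at ${used}%"
}
```

A real deployment would wire a check like this into the monitoring agent, or push the value as a custom metric with an alarm on it, rather than just echoing.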
Job Description:
○ Develop best practices for the team, and take responsibility for architecture, solutions, and documentation operations in order to meet the engineering department's quality standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Good knowledge of Terraform; someone who has worked on large TF code bases.
○ Deep understanding of Terraform best practices and of writing TF modules.
○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3, and IAM. A cost-aware mindset towards cloud services.
○ Deep understanding of kernel, networking, and OS fundamentals.
NOTICE PERIOD - max 30 days
What you do :
- Developing automation for the various deployments core to our business
- Documenting run books for various processes / improving knowledge bases
- Identifying technical issues, communicating and recommending solutions
- Miscellaneous support (user accounts, VPN, network, etc.)
- Develop continuous integration / deployment strategies
- Production systems deployment/monitoring/optimization
- Management of staging/development environments
What you know :
- Ability to work with a wide variety of open source technologies and tools
- Ability to code/script (Python, Ruby, Bash)
- Experience with systems and IT operations
- Comfortable with frequent incremental code testing and deployment
- Strong grasp of automation tools (Chef, Packer, Ansible, or others)
- Experience with cloud infrastructure and bare-metal systems
- Experience optimizing infrastructure for high availability and low latencies
- Experience with instrumenting systems for monitoring and reporting purposes
- Well versed in software configuration management systems (git, others)
- Experience with cloud providers (AWS or other) and tailoring apps for cloud deployment
- Data management skills
Education :
- Degree in Computer Engineering or Computer Science
- 1-3 years of equivalent experience in DevOps roles.
- Work conducted is focused on business outcomes
- Can work in an environment with a high level of autonomy (at the individual and team level)
- Comfortable working in an open, collaborative environment, reaching across functional boundaries.
Our Offering :
- True start-up experience - no bureaucracy and a ton of tough decisions that have a real impact on the business from day one.
- The camaraderie of an amazingly talented team that is working tirelessly to build a great OS for India and surrounding markets.
Perks :
- Awesome benefits, social gatherings, etc.
- Work with intelligent, fun and interesting people in a dynamic start-up environment.
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them, using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Mumbai, in Lower Parel, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real-time and at a fraction of the current cost.
A few recognitions:
- Recognized as one of the Top 25 startups in India to work with in 2019 by LinkedIn
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment, and to minimize downtime and turnaround time
What you need to work with us:
- Proficiency in at least one general-purpose programming language like Python, Java, etc.
- Experience in managing IaaS and PaaS components on popular public cloud service providers like AWS, Azure, GCP, etc.
- Proficiency in Unix operating systems and comfort with networking concepts
- Experience with developing/deploying a scalable system
- Experience with distributed databases & message queues (like Cassandra, Elasticsearch, MongoDB, Kafka, etc.)
- Experience in managing Hadoop clusters
- Understanding of containers, having managed them in production using container orchestration services
- Solid understanding of data structures and algorithms
- Applied exposure to continuous delivery pipelines (CI/CD)
- Keen interest and proven track record in automation and cost optimization
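The CI/CD exposure asked for above is, at its core, a sequence of ordered fail-fast stages. A toy, hedged sketch of that control flow in shell (stage names and bodies are placeholders, not a real pipeline definition):

```shell
# A toy fail-fast pipeline: stages run in order and the whole run aborts on
# the first failure, mirroring how Jenkins/GitLab CI stages behave.
run_pipeline() {
  local stage
  for stage in "$@"; do
    echo "--- stage: $stage ---"
    if ! "stage_$stage"; then
      echo "pipeline failed at stage: $stage" >&2
      return 1
    fi
  done
  echo "pipeline succeeded"
}

# Placeholder stage implementations; real ones would call linters, test
# runners, and image builds.
stage_lint()  { echo "linting...";  true; }
stage_test()  { echo "testing...";  true; }
stage_build() { echo "building..."; true; }

run_pipeline lint test build   # runs lint -> test -> build, printing each stage
```

Real CI servers add parallelism, artifacts, and retries on top, but the fail-fast ordering is the same idea.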
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology
DevOps Solution Architect
Below are the job details:
Role: DevOps Architect
Experience Level: 8-12 Years
Job Location: Hyderabad
Key Responsibilities :
Look through the various DevOps tools/technologies, identify their strengths, and provide direction to the DevOps automation team
Out-of-the-box thinking on the DevOps automation platform implementation
Explore various tools and technologies and do PoCs on integration of these tools
Evaluate backend APIs for various DevOps tools
Perform code reviews, keeping RASUI in context
Mentor the team on the various E2E integrations
Act as a liaison in evangelizing the automation solution currently implemented
Bring in various DevOps best practices/principles and participate in adoption with various app teams
Must have:
Should possess a Bachelor's/Master's in computer science with a minimum of 8 years of experience
Should possess a minimum of 3 years of strong experience in DevOps
Should possess expertise in using various DevOps tools, libraries, and APIs (Jenkins/JIRA/AWX/Nexus/GitHub/BitBucket/SonarQube)
Should possess expertise in optimizing the DevOps stack (containers/Kubernetes/monitoring)
2+ years of experience in creating solutions and translating them for the development team
Should have a strong understanding of OOP and the SDLC (Agile, SAFe standards)
Proficient in Python, with a good knowledge of its ecosystem (IDEs and frameworks)
Proficient in various cloud platforms (Azure/AWS/Google Cloud Platform)
Proficient in various DevOps offerings (Pivotal/OpenStack/Azure DevOps)
Regards,
Talent acquisition team
Tetrasoft India
Stay home and Stay safe