Responsibilities: The DevOps Engineer's core responsibilities include automated configuration and management of infrastructure, and continuous integration and delivery of distributed systems at scale in a hybrid environment.
Must-Have:
● You have 4-10 years of experience in DevOps.
● You have experience managing IT infrastructure at scale.
● You have experience automating the deployment of distributed systems and provisioning infrastructure at scale.
● You have in-depth, hands-on experience with Linux and Linux-based systems, including Linux scripting.
● You have experience with server hardware, networking, and firewalls.
● You have experience with source code management, configuration management, continuous integration, continuous testing, and continuous monitoring.
● You have experience with CI/CD and related tools.
● You have experience with monitoring tools like ELK, Grafana, and Prometheus.
● You have experience with containerization, container orchestration, and container management.
● You have a penchant for solving complex and interesting problems.
● You have worked in startup-like environments with high levels of ownership and commitment.
● BTech, MTech, or PhD in Computer Science or a related technical discipline.
● Knowledge of building microservices.
● Experience managing cloud infrastructure (AWS, GCP, Azure) with disaster recovery and security in mind.
● Experience with high-availability cluster setup.
● Experience creating alerting and monitoring strategies.
● Strong debugging skills.
● Experience with zero-downtime continuous delivery setups (Jenkins, AWS CodeDeploy, TeamCity, GoCD, etc.).
● Experience with infrastructure-as-code and automation tools (Bash, Ansible, Puppet, Chef, Terraform, etc.).
● Mastery of *nix systems, including working with Docker and process and network monitoring tools.
● Knowledge of monitoring tools like New Relic, AppDynamics, etc.
● Experience with messaging systems (RMQ, Kafka, etc.).
● Knowledge of DevOps intelligence.
● Experience setting up and driving DevOps initiatives inside the org.
● Good team player.
● Experience in Kubernetes cluster management is good to have.
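The zero-downtime continuous delivery requirement above usually reduces to one rule: never cut traffic over to a new fleet until every new instance is healthy. A minimal sketch of that blue/green gate, with all names and the `Instance` type being illustrative assumptions rather than anything from the posting:

```python
# Hypothetical blue/green cutover gate: traffic moves to the new
# ("green") fleet only when every green instance passes its health
# check; otherwise the old ("blue") fleet stays live, i.e. a rollback
# costs nothing because blue was never torn down.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    healthy: bool

def choose_live_fleet(blue: list[Instance], green: list[Instance]) -> str:
    """Return which fleet should receive production traffic."""
    if green and all(i.healthy for i in green):
        return "green"  # safe to cut over
    return "blue"       # stay on (or roll back to) the old fleet
```

In practice the health check would be an HTTP probe behind a load balancer (e.g. an ELB target group), but the decision logic is the same.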
About the Role As part of the engineering team, you would be expected to have deep technology expertise with a passion for building highly scalable products. This is a unique opportunity where you can impact the lives of people across 150+ countries!
Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploy and maintain in-house/customer systems, ensuring high availability, performance, and optimal cost.
• Automate build pipelines and ensure the right architecture for CI/CD.
• Work with engineering leaders to ensure cloud security.
• Develop standard operating procedures for various facets of infrastructure services (CI/CD, Git branching, SAST, quality gates, auto scaling).
• Perform and automate regular backups of servers and databases; ensure rollback and restore capabilities are real-time and zero-downtime.
• Lead the entire DevOps charter for ONE Championship; mentor other DevOps engineers and ensure industry standards are followed.
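Automating regular backups, as described above, also means deciding which old backups to keep. One common scheme is grandfather-father-son retention; the sketch below is purely illustrative (the tiers and windows are assumptions, not the company's policy): keep dailies for a week, Sunday weeklies for four weeks, and first-of-month monthlies for a year.

```python
# Illustrative grandfather-father-son backup retention rule.
from datetime import date

def keep_backup(backup_day: date, today: date) -> bool:
    """Decide whether a backup taken on backup_day should be retained."""
    age = (today - backup_day).days
    if age < 0:
        return False              # backup "from the future": ignore
    if age <= 7:
        return True               # daily tier: keep everything for a week
    if backup_day.weekday() == 6 and age <= 28:
        return True               # weekly tier: Sundays, four weeks
    if backup_day.day == 1 and age <= 365:
        return True               # monthly tier: 1st of month, one year
    return False
```

A cron job would run this over the backup inventory and prune anything for which it returns False.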
Requirements
• Overall 5+ years of experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or equivalent streams from an institute of repute
• Experience in Azure is a must; AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, Alertmanager, and New Relic
• Good knowledge of source code control (Git)
• Expertise in continuous integration and continuous deployment setup using Azure Pipelines or Jenkins
• Strong experience in programming languages; Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL and NoSQL databases
• Strong Linux fundamentals
• Experience in SonarQube, Locust, and BrowserStack is a plus
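The Prometheus/Alertmanager experience asked for above centers on one idea: an alert should fire only when a condition has held for a sustained window (Prometheus's `for:` clause), so one-off spikes don't page anyone. A hedged sketch of that evaluation over a series of metric samples, with all names and thresholds being illustrative:

```python
# Sketch of a Prometheus-style "for" duration on an alert rule:
# fire only when the last `for_points` samples all breach the threshold.
def should_fire(samples: list[float], threshold: float, for_points: int) -> bool:
    """True when the condition has held for the whole trailing window."""
    if len(samples) < for_points:
        return False  # not enough history to judge yet
    return all(s > threshold for s in samples[-for_points:])
```

With a 15-second scrape interval, `for_points=20` would approximate `for: 5m` in a real rule.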
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding and due-diligence platforms, profiling millions of entities and trillions of associations among them using data collated from more than 700 publicly available government sources. Operating primarily in the B2B fintech enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time and at a fraction of the current cost.
A few recognitions:
Recognized as one of the Top 25 startups in India to work with in 2019 by LinkedIn
Winner of HDFC Bank's Digital Innovation Summit 2020
Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
Winner of Amazon AI Award 2019 for Fintech
Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
Winner of FinShare 2018 challenge held by ShareKhan
Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
2nd place in the Citi India FinTech Challenge 2018 by Citibank
Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
Deploy and maintain mission-critical information extraction, analysis, and management systems
Manage low cost, scalable streaming data pipelines
Provide direct and responsive support for urgent production issues
Contribute ideas towards secure and reliable Cloud architecture
Use open source technologies and tools to accomplish specific use cases encountered within the project
Use coding languages or scripting methodologies to solve automation problems
Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
Identify processes and practices to streamline development & deployment to minimize downtime and maximize turnaround time
What you need to work with us:
Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
Experience in managing the IaaS and PaaS components on popular public cloud service providers like AWS, Azure, GCP, etc.
Proficiency in Unix Operating systems and comfortable with Networking concepts
Experience with developing/deploying a scalable system
Experience with the Distributed Database & Message Queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.)
Experience in managing Hadoop clusters
Understanding of containers and have managed them in production using container orchestration services.
Solid understanding of data structures and algorithms.
Applied exposure to continuous delivery pipelines (CI/CD).
Keen interest and proven track record in automation and cost optimization.
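The automation points above (streamlining deployment, eliminating manual work) rest on the desired-state reconciliation idea used by tools like Terraform and Ansible: declare the state you want, diff it against what exists, and apply only the difference, so reruns are idempotent. A minimal sketch with made-up service names:

```python
# Illustrative desired-state reconciliation: compute the minimal set of
# actions to move actual infrastructure toward the declared state.
def plan(desired: dict[str, str], actual: dict[str, str]) -> list[tuple[str, str]]:
    """Map service -> version; return (action, service) pairs to apply."""
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(("create", name))        # missing entirely
        elif actual[name] != version:
            actions.append(("update", name))        # wrong version
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))        # no longer declared
    return actions
```

Running `plan` again after applying its output yields an empty list, which is exactly the idempotence property configuration management depends on.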
AWS DevOps Engineer
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities:
This is a highly accountable role, and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Developing deployment strategies and building configuration management.
• Deploying and updating system and application software.
• Ensuring regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• Acting as the first point of contact for handling customer (possibly internal stakeholder) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Developing tooling and processes to drive and improve the customer experience; creating playbooks.
• Eliminating manual tasks via configuration management.
• Intelligently migrating services from one AWS region to other AWS regions.
• Creating, implementing, and maintaining security policies to ensure ISO/GDPR/SOC/PCI compliance.
• Verifying that infrastructure automation meets compliance goals and is current with the disaster recovery plan.
• Evangelizing configuration management and automation to other product developers.
• Staying updated on upcoming technologies to maintain state-of-the-art infrastructure.
Required candidate profile:
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.).
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch configuration, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands-on experience in Docker is a big plus.
• Experience working in an Agile, fast-paced DevOps environment.
• Strong knowledge of databases such as MongoDB, MySQL, DynamoDB, Redis, and Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, and Nagios.
• Fluency with version control systems, with a preference for Git.
• Strong Linux-based infrastructure and Linux administration experience.
• Experience installing and configuring application servers such as WebLogic, JBoss, and Tomcat.
• Hands-on experience with logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and train others on technical and procedural topics.
YOE: 1-3 years. Skills: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure.
➢ Conceptualize, architect, and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect, and build a containerized infrastructure using Docker, Mesosphere, or similar platforms.
➢ Work with developers to institute systems, policies, and workflows that allow for rollback of deployments; triage the release of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with post-mortems, follow-up, and review of issues affecting production availability.
➢ Establish and enforce systems monitoring tools and standards.
➢ Establish and enforce risk assessment policies and standards.
➢ Establish and enforce escalation policies and standards.
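The auto-scaling infrastructure mentioned above is typically driven by target tracking: size the fleet so that average utilization lands near a target, clamped to minimum and maximum bounds. A hedged sketch of that decision (the target and bounds below are illustrative defaults, not values from the posting):

```python
# Illustrative target-tracking auto-scaling decision: scale the fleet
# proportionally to observed CPU load, within hard min/max bounds.
import math

def desired_capacity(current: int, avg_cpu: float, target_cpu: float = 60.0,
                     min_cap: int = 2, max_cap: int = 20) -> int:
    """Return the instance count that brings avg CPU near target_cpu."""
    wanted = math.ceil(current * avg_cpu / target_cpu)
    return max(min_cap, min(max_cap, wanted))
```

Real auto-scalers (e.g. AWS target-tracking policies) add cooldown periods so that scale-in does not flap, but the proportional core is the same.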
5+ years of experience developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch, and WAF (Web Application Firewall).
Amazon Web Services (AWS) Certified Developer - Associate is required; AWS Certified DevOps Engineer - Professional is preferred.
5+ years of experience using one or more modern programming languages (Python, Node.js).
Hands-on experience migrating data to the AWS cloud platform
Experience with Scrum/Agile methodology.
Good understanding of core AWS services, uses, and basic AWS architecture best practices (including security and scalability)
Experience with AWS Data Storage Tools.
Experience configuring and implementing AWS tools such as CloudWatch and CloudTrail, and directing system logs to them for monitoring.
Experience working with Git or similar tools.
Ability to communicate and represent AWS Recommendations and Standards.
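One AWS standard worth being able to explain in this role is the retry guidance for throttled API calls: exponential backoff with full jitter, capped so retries stay bounded. A minimal sketch (the base and cap values are illustrative assumptions):

```python
# Sketch of capped exponential backoff with full jitter, the retry
# pattern AWS recommends for throttled API calls: each retry waits a
# random time in [0, min(cap, base * 2**attempt)] so that many clients
# retrying at once do not synchronize into repeated thundering herds.
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Seconds to sleep before retry number `attempt` (0-indexed)."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

A calling loop would sleep for `backoff_delay(attempt)` after each throttling error and give up after a fixed number of attempts; the AWS SDKs implement this internally.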
About Artivatic:
Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products and solutions for finance, healthcare, and insurance businesses. It is based out of Bangalore with a 20+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million plus people to get insurance, financial access, and health benefits using alternative data sources, increasing their productivity, efficiency, automation power, and profitability, and hence improving their way of doing business more intelligently and seamlessly.
- Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation and compliance, automated decisions, monitoring, claims processing, sentiment/psychology behaviour analysis, auto insurance claims, travel insurance, disease prediction for insurance, and more.
- We have raised US $300K earlier, built products successfully, and done a few PoCs successfully with some top enterprises in the insurance, banking, and health sectors. Currently, we are 4 months away from generating continuous revenue.
- We are looking for a Senior DevOps Engineer to support AWS/Google/Azure/DigitalOcean/internal ops infrastructure. In this role, you are expected to play a key role in building the continuous deployment model and enabling high-quality rapid delivery of cloud, device, mobility, web, and desktop applications for our flagship platform, Artivatic.
Responsibilities:
- Design, implement, and maintain ops infrastructure for Artivatic to ensure high availability of environments and services at all times.
- Provide a robust continuous delivery pipeline that enables the engineering team to deliver features to customers efficiently.
Skills Requirements:
- 3-8 years of experience building/maintaining infrastructure, primarily on AWS/Google/Azure/internal environments, for cloud, mobility, web, and desktop applications.
- Background in Linux/Unix administration.
- Demonstrated depth of experience in communicating with internal groups, including Product Management and Operations.
- Proven ownership and delivery of a sizable product or product component; a strong sense of independence and self-direction is essential.
- Availability for travel and flexible work hours to work with teams across time zones.
Technical Competencies:
- 3-8 years of experience providing DevOps support to desktop, cloud, mobility, and web-based SaaS solutions with a continuous deployment model.
- Deep knowledge of CI (Continuous Integration) and CD (Continuous Deployment) methodologies, with continuous integration servers such as Jenkins.
- Experience in virtualization, disaster recovery, backup, and security processes. Experience with automation/configuration management using Puppet, Chef, or an equivalent.
- Ability to use a wide variety of open-source technologies and cloud services (experience with AWS is required; Azure and Google Cloud are good to have). Knowledge of best practices and IT operations in an always-up, always-available service.
- Experience with monitoring tools such as Sensu, New Relic, etc.
- Strong experience with Cassandra, REST APIs, SQL, MySQL, and NoSQL.
- Experience with container-based deployments, e.g. Docker.
- A working understanding of code and scripts (Java, C++, Scala, PHP, Python, Perl, and/or Ruby).
- Experience with Jenkins and Ant, Maven, or Make, with a build, packaging, branching, and continuous integration background.
- Maintenance of automated tooling code in shell, Python, or a similar scripting language.
- Strong familiarity with JIRA and SCM/Git, including branching and merging strategies.
Qualifications: Bachelor's or Master's degree in Computer Science or a related field.
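The continuous delivery pipelines described above are, structurally, a dependency graph of stages (build before test, test before deploy) that a CI server like Jenkins runs in topological order. A small sketch of that ordering using Python's standard library; the stage names are made up for the example:

```python
# Illustrative CI/CD stage ordering: given each stage's prerequisites,
# produce a valid execution order (what Jenkins pipelines do implicitly).
from graphlib import TopologicalSorter

def stage_order(deps: dict[str, set[str]]) -> list[str]:
    """deps maps a stage to the set of stages it depends on."""
    return list(TopologicalSorter(deps).static_order())
```

`graphlib` also raises `CycleError` on circular dependencies, which is exactly the failure mode a misconfigured pipeline should surface early.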