Sr. DevOps Engineer (5 to 8 yrs. exp.)
- Strong experience in infrastructure provisioning in the cloud using Terraform and AWS CloudFormation templates.
- Strong experience in serverless and containerization technologies such as Kubernetes and Docker.
- Strong experience in Jenkins and AWS-native CI/CD implemented as code.
- Strong experience in cloud operational automation using Python, shell scripts, the AWS CLI, AWS Systems Manager, AWS Lambda, etc.
- Day-to-day AWS cloud administration tasks.
- Strong experience in configuration management using Ansible and PowerShell.
- Strong experience in Linux and at least one scripting language is required.
- Knowledge of a monitoring tool is an added advantage.
- Understanding of DevOps practices, including continuous integration, delivery, and deployment.
- Hands-on experience with the application deployment process.
Key Skills: AWS, Terraform, Serverless, Jenkins, DevOps, CI/CD, Python, CLI, Linux, Git, Kubernetes
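As a small illustration of the "cloud operational automation using Python and the AWS CLI" this role calls for, the sketch below builds (but deliberately does not execute) an `aws ec2 stop-instances` command. The function name, region default, and instance IDs are invented for the example.

```python
import shlex

def build_stop_command(instance_ids, region="us-east-1", dry_run=True):
    """Build an `aws ec2 stop-instances` command for a batch of instances.

    Pure string construction: the command is returned rather than run, so
    the sketch stays runnable without AWS credentials. In a real script it
    would be passed to subprocess.run().
    """
    cmd = ["aws", "ec2", "stop-instances", "--region", region,
           "--instance-ids", *instance_ids]
    if dry_run:
        cmd.append("--dry-run")  # let AWS validate permissions without acting
    return shlex.join(cmd)

print(build_stop_command(["i-0abc123", "i-0def456"]))
# aws ec2 stop-instances --region us-east-1 --instance-ids i-0abc123 i-0def456 --dry-run
```

Keeping the command construction separate from execution makes this kind of operational script easy to unit test and to dry-run safely.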
Role: Software Developer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Education: Any computer graduate.
Salary: Best in industry.
About Compufy Technolab LLP
This company is a network of the world's best developers, offering full-time, long-term remote software jobs with better compensation and career growth. We enable our clients to accelerate their cloud offering and capitalize on the cloud. We have our own IoT/AI platform and provide professional services on that platform to build custom clouds for our clients' IoT devices. We also build mobile apps and run 24x7 DevOps/site reliability engineering for our clients.
We are looking for very hands-on SRE (Site Reliability Engineering) engineers with 3 to 6 years of experience. The person will be part of a team responsible for designing and implementing automation from scratch for medium- to large-scale cloud infrastructure and providing 24x7 services to our North American and European customers. This also includes ensuring ~100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
This person MUST have:
- B.E Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting and setting up Linux environments, able to write shell scripts for any given requirement.
- 1+ years of hands-on experience setting up and configuring AWS or GCP services from scratch and maintaining them.
- 1+ years of hands-on experience setting up and configuring Kubernetes and EKS and ensuring high availability of container orchestration.
- 1+ years of hands-on experience setting up CI/CD from scratch in Jenkins and GitLab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred.
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
- Minimum 3 years of experience as an SRE automation engineer building, running, and maintaining production sites. We are not looking for candidates whose experience is limited to L1/L2 support or build-and-deploy work.
- Remote, anywhere in India
- 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
ketteQ is a supply chain planning and automation platform. We are looking for an experienced AWS DevOps Engineer to help manage AWS infrastructure and automation. This job comes with an attractive compensation package and work-from-home and flex-time benefits. You will get to work on projects for large global brands with a highly experienced team based in the US and India. If you are a high-energy, motivated, initiative-taking individual, then this could be a fantastic opportunity for you. Candidates must meet the following requirements:
Duties & Responsibilities
- Deployment, automation, management, and maintenance of AWS cloud-based production system
- Build a deployment pipeline for AWS and Salesforce
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure, and maintain AWS cloud infrastructure defined as CloudFormation templates
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
- At least 5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, Fargate, S3, CloudFormation)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Solid foundation in networking and Linux administration
- Experience with Docker, GitHub, Jenkins, CloudFormation, and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Database experience to help with monitoring and performance; PostgreSQL experience preferred
- AWS certification preferred
- Bachelor's degree in Engineering or a related field
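One duty listed above is infrastructure cost analysis and optimization. The arithmetic behind a basic rightsizing estimate can be sketched as follows; the hourly rates are hypothetical, not real AWS prices.

```python
def monthly_cost(hourly_rate, count, hours=730):
    """Estimate monthly on-demand spend: rate x instance count x ~730 hours/month."""
    return round(hourly_rate * count * hours, 2)

# Hypothetical rates for illustration only; real prices vary by region and type.
current = monthly_cost(0.096, 10)     # 10 instances at an assumed $0.096/hr
rightsized = monthly_cost(0.048, 10)  # same fleet on an assumed half-price size
print(current, rightsized, round(current - rightsized, 2))
```

Even this simple model is enough to rank candidate optimizations (rightsizing, scheduling off-hours shutdowns, reserved capacity) by expected monthly savings.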
Our client is a call management solutions company that helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI- and cloud-based call operating facility that is affordable as well as feature-optimized. Advanced features such as call recording, IVR, toll-free numbers, and call tracking are based on automation and enhance the call-handling quality and process for each client, as per their requirements. They serve over 6,000 business clients, including large accounts like Flipkart and Uber.
- Being involved in configuration management, web services architectures, DevOps implementation, build and release management, database management, backups, and monitoring.
- Ensuring reliable operation of CI/CD pipelines
- Orchestrating the provisioning, load balancing, configuration, monitoring, and billing of resources in the cloud environment in a highly automated manner
- Managing logging, metrics, and alerting.
- Creating Dockerfiles
- Creating Bash/Python scripts for automation.
- Performing root cause analysis for production errors.
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services: deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of Infrastructure-as-Code tools such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools such as ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
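Several of the duties above (Bash/Python automation, root cause analysis, alerting) boil down to small scripts like the sketch below, which tallies HTTP status classes from access-log lines. The log format and the field position of the status code are simplifying assumptions for illustration.

```python
from collections import Counter

def count_status_classes(log_lines):
    """Tally HTTP status classes (2xx/4xx/5xx) from access-log lines.

    Assumes a simplified combined-log-like format where the status code
    is the 9th whitespace-separated field -- an illustrative assumption.
    """
    classes = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) > 8 and fields[8].isdigit():
            classes[fields[8][0] + "xx"] += 1
    return classes

logs = [
    '127.0.0.1 - - [01/Jan/2024:10:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '127.0.0.1 - - [01/Jan/2024:10:00:01 +0000] "GET /api HTTP/1.1" 502 0',
]
print(count_status_classes(logs))  # Counter({'2xx': 1, '5xx': 1})
```

A spike in the `5xx` bucket is the kind of signal that would feed an alerting rule or kick off a root-cause investigation.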
TIFIN is a fintech company backed by industry leaders including JP Morgan, Morningstar, Broadridge and Hamilton Lane.
We build engaging experiences through powerful AI and personalization. We leverage the combined power of investment intelligence, data science, and technology to make investing a more engaging experience and a more powerful driver of financial wellbeing.
At TIFIN, design and behavioral thinking enables engaging customer centered experiences along with software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
We hope to change the world of wealth in the way personalized delivery has changed the world of movies, music, and more. In a world where every individual is unique, we believe the power of AI-based personalization to match individuals to financial advice and investments is necessary to drive wealth goals.
- Shared Understanding through Listening and Speaking the Truth. We communicate with radical candor, precision and compassion to create a shared understanding. We challenge, but once a decision is made, commit fully. We listen attentively, speak candidly.
- Teamwork for Teamwin. We believe in win together, learn together. We fly in formation. We cover each other’s backs. We inspire each other with our energy and attitude.
- Make Magic for our Users. We center around the voice of the customer. With deep empathy for our clients, we create technology that transforms investor experiences.
- Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. We strive to be the best we can possibly be. No excuses.
- Innovate with Creative Solutions. We believe that disruptive innovation begins with curiosity and creativity. We challenge the status quo and problem solve to find new answers.
WHAT YOU'LL BE DOING:
As part of TIFIN's technology division, you will lead the DevOps function of a software product, demonstrating leadership abilities.
WHO ARE YOU:
- Lead the end-to-end technology/infrastructure environments
- Troubleshoot any issues that arise from deployments and other automations
- Setup and manage security configurations
- Implement systems/tools/processes to monitor performance and security integrity of the technology stack
- Implement CI/CD from our source control platform (e.g., GitLab)
- Develop automation tools and dashboards to manage and monitor the infrastructure
- Provide technical guidance during software development
- Stay current with industry trends and source new ways for our business to improve
- Set up/decommission technology assets and maintain asset and configuration records
- Maintain inventory of the relevant environments
- 5+ years of experience, with substantial time in a DevOps Engineering/Lead role
- Strong experience designing and implementing highly available, scalable solutions
- Expertise in planning/implementing BCP / DR policies in line with company objectives
- Strong experience with Linux servers and their administration/troubleshooting
- Strong understanding of Networking concepts and best practices
- Working experience with Docker and Kubernetes
- Hands-on experience with AWS & GCP services like VPC, EC2, S3, ELB, RDS, ECS/EKS, IAM, CloudFront, CloudWatch, SQS/SNS, App Engine, etc.
- Strong experience with databases such as PostgreSQL and Redis
- Knowledge of scripting languages such as Python and Bash
- Expertise in Git (GitHub/ GitLab)
- Experience working with Data Lakes and ETL pipelines
- Experience with project workflow tools such as Jira in an Agile-Scrum environment
- Experience with open-source technologies and cloud services
- Strong communication & interpersonal skills and ability to explain protocol and processes to the team
- Strong troubleshooting skills with the ability to spot issues before they become problems
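Monitoring and automation tooling of the kind listed above typically retries flaky probes with exponential backoff rather than hammering an unhealthy endpoint. A minimal sketch, with illustrative defaults rather than a prescribed policy:

```python
def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Exponential backoff schedule for re-probing a flaky endpoint.

    The delay doubles on each attempt and is capped at `cap` seconds;
    base/cap/attempts are example values, not recommendations.
    """
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Production monitors usually add random jitter to each delay so that many probes do not retry in lockstep after an outage.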
COMPENSATION AND BENEFITS PACKAGE:
Competitive and commensurate to experience + discretionary annual bonus + ESOPs
About the Tifin Group: The Tifin group combines expertise in finance, technology, entrepreneurship and investing to start and help build a portfolio of brands and companies in areas of investments, wealth management and asset management.
TIFIN companies are centered around the user and emphasize design innovation to build operating systems. We focus on simplifying and democratizing financial science to make it more holistic and integral to users’ lives.
You will need to drive automation to implement scalable and robust applications. You will bring dedication and passion to server-side optimization, ensuring low latency and high performance for the cloud deployed within the datacenter. You should have sound knowledge of the OpenStack and Kubernetes domains.
YOUR ‘OKR’ SUMMARY
OKR means Objective and Key Results.
As a DevOps Engineer, you will understand the overall movement of data across the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own their deployment. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing your acceptance tests, and reviewing the work and the test results.
What you will do
- As a DevOps Engineer, be responsible for systems used by customers across the globe.
- Set goals for the overall system and divide them into goals for each sub-system.
- Guide, motivate, convince, and mentor the architects on sub-systems and help them achieve improvements with agility and speed.
- Identify performance bottlenecks and come up with solutions to optimize the time and cost taken by the build/test system.
- Be a thought leader contributing to capacity planning for software/hardware, spanning internal and public cloud, solving the trade-off between turnaround time and utilization.
- Bring in technologies enabling massively parallel systems to improve turnaround time by an order of magnitude.
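The improvement available from massive parallelism is bounded by Amdahl's law: speedup = 1 / ((1 - p) + p/n) for a parallelizable fraction p and n workers. A quick sketch of that arithmetic, with example numbers chosen for illustration:

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's law: upper bound on speedup when only part of a job parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# If 95% of a build/test pipeline parallelizes, 64 workers yield roughly
# 15x, not 64x -- the serial 5% dominates the turnaround time.
print(round(amdahl_speedup(0.95, 64), 1))
```

This is why bottleneck identification comes first: shrinking the serial fraction often buys more than adding workers.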
What you will need
A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.
- BS or BE/B.Tech or equivalent experience in EE/CS with 10+ years of experience.
- Strong background in architecting and shipping distributed, scalable software products, with a good understanding of systems programming.
- Excellent background in cloud technologies such as OpenStack, Docker, Kubernetes, Ansible, and Ceph is a must.
- Excellent understanding of hybrid and multi-cloud architecture and edge computing concepts.
- Ability to identify bottlenecks and come up with solutions to optimize them.
- Programming and software development skills in Python and shell scripting, along with a good understanding of distributed systems and REST APIs.
- Experience working with SQL/NoSQL database systems such as MySQL, MongoDB, or Elasticsearch.
- Excellent knowledge of and working experience with Docker containers and virtual machines.
- Ability to work effectively across organizational boundaries to maximize alignment and productivity between teams.
- Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporation.
- Deep understanding of technology and passion for what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to getting the most performance out of the system being worked on.
- Prior development of a large software project using a service-oriented architecture operating under real-time constraints.
What's In It for You?
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will get experience solving real-world problems; eventually, you become a problem solver.
Benefits & Perks:
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and surely
- Creative Freedom + Flat hierarchy
- Sponsorship for employees who represent the company at events and meetups.
- Flexible working hours
- 5-day work week
- Hybrid Working model (Office and WFH)
Our Hiring Process:
Candidates for this position can expect the following hiring process (subject to successfully clearing every round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited for first technical interview
- Next, candidates will be invited for final technical interview
- Finally, candidates will be invited for Culture Plus interview with HR
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
As always, interviews and the screening call will be conducted via a mix of telephone and video calls.
So, if you are looking for an opportunity to really make a difference, make it with us…
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.
Araali Networks is seeking a highly driven DevOps engineer to help streamline the release and deployment process, while assuring high quality.
Use best practices to streamline the release process
Use IaC methods to manage cloud infrastructure
Use test automation for release qualification
Skills and Qualifications:
Hands-on experience in managing release pipelines
Good understanding of public cloud infrastructure
Working knowledge of Python test frameworks like PyUnit, and automation tools like Selenium, Cucumber, etc.
Strong analytical and debugging skills
Bachelor’s degree in Computer Science
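Since the role asks for working knowledge of Python test frameworks like PyUnit, here is a minimal `unittest` example qualifying a toy release helper; the `semver_bump` function and its behavior are invented for illustration.

```python
import unittest

def semver_bump(version, part):
    """Bump a semantic version string -- a toy release-qualification helper."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

class TestSemverBump(unittest.TestCase):
    """PyUnit-style release-qualification checks for the helper above."""

    def test_patch_bump(self):
        self.assertEqual(semver_bump("1.2.3", "patch"), "1.2.4")

    def test_minor_resets_patch(self):
        self.assertEqual(semver_bump("1.2.3", "minor"), "1.3.0")
```

Saved as a module, the suite runs under the standard discovery command, `python -m unittest`, which is how such checks are typically wired into a release pipeline.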
Araali Networks is a SaaS-based cybersecurity startup that has raised a total of $10M from well-known investors like A Capital, Firebolt Ventures, and SV Angels, and through a strategic investment by a publicly traded security company.
The company is disrupting the cloud firewall market by auto-creating nano-perimeters around every cloud app. The Araali solution enables developers to focus on features and improves the security posture through simplification and automation.
The security controls are embedded at the time of DevOps. The precision of Araali controls also helps with security operations where alerts are precise and intelligently routed to the right app team, making it actionable in real-time.
We're building an easy way for product teams to create and experiment with game-like experiences - without a developer. In short - CustomerGlu is a low code interactive engagement platform.
We're backed by Techstars and top-notch VCs from Silicon Valley and India, read a recent Yourstory article here.
We're growing at a fast pace: one new customer goes live every week, with a growing pipeline of companies in integration.
Currently, we're looking to onboard a generalist engineer that can work on the cloud architecture, DevOps, security and pick up the challenge of scaling a global SaaS from India.
We're headquartered in the US with a fully owned subsidiary in India.
- Take ownership of the complete infrastructure development and maintenance
- Own, innovate and maintain processes for good DevOps practices within the company
- Educate the team and apply OWASP security principles in software design
- Optimize cost and set alerts across infrastructure
- Build good logging and error tracing design patterns
- Build and maintain auto-scaling infrastructure
- Design systems to meet company SLA and SLO
- Experience in building multi-region/multi-datacenter cloud infrastructure
- Experience in building highly available deployments (three nines or above)
- Experience with building and maintaining CI/CD pipelines
- Experience in container-based scalable deployments and orchestration tools such as Kubernetes, ECS, or others
- Experience with MongoDB-compatible databases (MongoDB, CosmosDB, DocumentDB)
- Experience in designing VPCs, multi-regional networking, and best practices in ACLs
- Experience with OpenSSL and OpenVPN deployments
- Experience with distributed data stores like Cassandra and DynamoDB
- Good understanding of single points of failure in distributed system infrastructure and best practices
- Good understanding of GDPR and data storage design for multi-region systems
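The "three nines or above" availability requirement translates directly into a downtime budget: (1 - availability) x period. A quick sketch, assuming a 30-day month as the period:

```python
def downtime_budget_minutes(availability, period_minutes=30 * 24 * 60):
    """Allowed downtime per period for a given availability target.

    The 30-day month default is an assumption for illustration; SLAs may
    define the period differently (calendar month, quarter, year).
    """
    return round((1.0 - availability) * period_minutes, 1)

# "Three nines" (99.9%) allows ~43.2 minutes of downtime in a 30-day month.
print(downtime_budget_minutes(0.999))
```

Framing the SLA as an error budget like this is what lets an SRE team decide how much risk a given deploy or maintenance window can spend.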
Good to have
- Experience with Terraform, Ansible, Jenkins, or similar tools
- Experience with data streaming infrastructure such as Kafka, Kinesis, etc.
- Good understanding of security best practices for infrastructure, databases, and applications
- In-depth understanding of the OWASP Top 10 vulnerabilities and mitigation strategies
- Proficient in application gateways, load balancers, and Nginx or other web server configurations
- Experience with configuring serverless offerings such as Lambda or Azure Functions
- Experience with Spark deployments (standalone or managed, like EMR)
- Broad ownership of software that enables thousands of customers globally
- Leadership experience from working with a growing engineering team
- Build a cutting-edge, much-needed product