Kutumb is the first and largest communities platform for Bharat, growing on an exponential trajectory: more than 1 crore users use Kutumb to connect with their communities. We are backed by world-class VCs and angel investors, and we are looking for exceptional Infrastructure Engineers to join our Engineering team.
More on this here: https://kutumbapp.com/why-join-us.html
We’re excited if you have:
- Recent experience designing and building unified observability platforms that help teams turn an often-overwhelming volume of data (metrics, logs, and traces) into a quick answer to whether an application or service is operating as desired
- Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, the ELK stack (Elasticsearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
- Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
- Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
- Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
- The drive and self-motivation to understand the intricate details of a complex infrastructure environment
- Experience using CI/CD tools to automatically perform canary analysis and roll out changes after they pass automated gates (think Argo and Keptn)
- Hands-on experience working with AWS
- Bonus points for knowledge of ETL pipelines and Big data architecture
- Great problem-solving skills and pride in your work
- Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
- Experience abstracting all of the above behind as simple an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
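To make the observability expectations above concrete: the OpenMetrics standard mentioned earlier defines a plain-text exposition format that Prometheus scrapes. Below is a minimal, illustrative sketch of rendering a counter family in that format by hand; in practice you would use a client library such as prometheus_client, and the metric name, labels, and values here are invented:

```python
def render_counter(name, help_text, samples):
    """Render one counter family in OpenMetrics text exposition format.

    samples: list of (labels_dict, value) pairs.
    """
    lines = [
        f"# TYPE {name} counter",
        f"# HELP {name} {help_text}",
    ]
    for labels, value in samples:
        # Canonicalize label order by sorting on the label name.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        # OpenMetrics counter samples carry the _total suffix.
        lines.append(f"{name}_total{{{label_str}}} {value}")
    lines.append("# EOF")  # an OpenMetrics exposition must end with # EOF
    return "\n".join(lines)

print(render_counter(
    "http_requests",
    "Total HTTP requests served.",
    [({"method": "GET", "code": "200"}, 1027),
     ({"method": "POST", "code": "500"}, 3)],
))
```

Prometheus scrapes exactly this kind of text from a `/metrics` endpoint; a real exporter would also track the values over time rather than taking them as arguments.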
What you’ll be doing:
- Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
- Demonstrate great communication skills in working with technical and non-technical audiences
- Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem
Tools we use:
Kops, Argo, Prometheus/Loki/Grafana, Kubernetes, AWS, MySQL/PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK
What we offer:
- High pace of learning
- Opportunity to build the product from scratch
- High autonomy and ownership
- A great and ambitious team to work with
- Opportunity to work on something that really matters
- Top of the class market salary and meaningful ESOP ownership
- 3+ years of relevant experience
- 2+ years of experience with AWS (EC2, ECS, RDS, ElastiCache, etc.)
- Well versed in maintaining infrastructure as code (Terraform, CloudFormation, etc.)
- Experience setting up CI/CD pipelines from scratch
- Knowledge of setting up and securing networks (VPN, intranet, VPC, peering, etc.)
- Understanding of common security issues
What you will do
We are looking for an exceptional engineering lead to join our team. You will build and own systems that have a critical impact on the business and on our community's experience from day one.
- Build and lead an agile engineering team
- Work closely with Founder on product development
- Collaborate with the operations team to understand customer pain points and solve interesting problems
- Code, test, ship: manage the entire application lifecycle
- Build libraries and documentation for future reference
- Research and develop best practices and tools to enable the delivery of features
- Set up capabilities to track and report business and user metrics
- Design and improve architecture to ensure scalability
Requirements
- Proven experience at scaling tech companies, preferably in commerce or social networking
- Keen to innovate, open-minded and collaborative
- Able to interpret product needs and suggest appropriate solutions
- Have led a team and are still able to code hands-on
- Strong communication skills
- Strong work ethic: responsible, responsive, and detail-oriented.
Technologies we use
Go, Flutter, AWS, Google Cloud
At Egnyte we build and maintain our flagship software: a secure content platform used by companies like Red Bull and Yamaha.
We store, analyze, organize, and secure billions of files and petabytes of data with millions of users. We observe more than 1M API requests per minute on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our Engineers are part of the process from design to code, to test, to deployment, and back again for further iterations.
We have 300+ engineers spread across the US, Poland, and India.
You will be part of our DevOps Team working closely with our DBA team in automating, monitoring, and scaling our massive MySQL cluster. Previous MySQL experience is a plus.
Your day-to-day at Egnyte
- Designing, building, and maintaining cloud environments (using Terraform, Puppet or Kubernetes)
- Migrating services to cloud-based environments
- Collaborating with software developers and DBAs to create a reliable and scalable infrastructure for our product.
About you
- 2+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes
- Programming prowess (Python, Java, Ruby, Golang, or JavaScript)
- Experience with databases (MySQL, PostgreSQL, RDS/Aurora, or others)
- Experience with public cloud services (GCP/AWS/Azure)
- Good understanding of the Linux Operating System on the administration level
- Preferably, experience with HA solutions: our tools of choice include Orchestrator, ProxySQL, HAProxy, Corosync & Pacemaker, etc.
- Experience with metric-based monitoring solutions (Cloud: CloudWatch/Stackdriver, On-prem: InfluxDB/OpenTSDB/Prometheus)
- Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude)
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices.
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications.
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar.
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
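As a rough illustration of the GitOps workflow the responsibilities above describe: the desired state lives in version-controlled manifests, which a reconciler (e.g., Argo CD) then applies to the cluster. A minimal sketch of templating such a manifest; the service name, image, and registry are hypothetical:

```python
import json

def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest as a dict,
    of the kind a GitOps pipeline templates and commits to Git."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest(
    "payments-api", "registry.example.com/payments-api:1.4.2")
print(json.dumps(manifest, indent=2))
```

A GitOps pipeline would render a file like this, commit it to the environment repo, and let the reconciler apply the diff, rather than calling `kubectl` directly.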
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.
We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client in Bangalore (permanent role).
Experience: 4+ years
Responsibilities:
• As part of a team, you will design, develop, and maintain a scalable multi-cloud DevOps blueprint.
• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering and legacy application modernization
• Continuously improve the CI/CD pipeline, tools, processes, procedures, and systems relating to developer productivity
• Collaborate continuously with the product development teams to implement the CI/CD pipeline.
• Contribute subject-matter expertise on developer productivity, DevOps, and infrastructure automation best practices.
Mandatory Skills:
• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.
• Strong scripting skills (Java or Python) are a must.
• Experience with automation tools such as Ansible, Chef, Puppet etc.
• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle
• Hands-on working experience in developing or deploying microservices is a must.
• Hands-on working experience with at least one of the popular cloud infrastructures (AWS / Azure / GCP / Red Hat OpenStack) is a must.
• Knowledge about microservices hosted in leading cloud environments
• Experience with containerizing applications (Docker preferred) is a must
• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.
• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software
Desirable Skills:
• Experience working with Secret management services such as HashiCorp Vault is desirable.
• Experience working with Identity and access management services such as Okta, Cognito is desirable.
• Experience with monitoring systems such as Prometheus, Grafana is desirable.
Educational Qualifications and Experience:
• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)
• 4 to 6 years of hands-on experience in server-side application development & DevOps
About Us
At Digilytics™, we build and deliver easy-to-use AI products to the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients' business.
Founded by Arindom Basu (Founding member of Infosys Consulting), the leadership of Digilytics™ is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics™ leadership is focused on building a values-first firm that will stand the test of time.
We are currently focused on developing a product, Revel FS, to revolutionise loan origination for mortgages and secured lending. We are also developing a second product, Revel CI, focused on improving trade (secondary) sales to consumer industry clients like auto and FMCG players.
The leadership strongly believes in the ethos of enabling intelligence across the organization. Digilytics AI is headquartered in London, with a presence across India.
Website: http://www.digilytics.ai
- Know about our products:
- Digilytics RevEL: https://www.digilytics.ai/RevEL/Digilytics
- Digilytics RevUP: https://www.digilytics.ai/RevUP/
- What it's like working at Digilytics: https://www.digilytics.ai/about-us.html
- Digilytics featured in Forbes: https://bit.ly/3zDQc4z
Responsibilities
- Experience with Azure services (virtual machines, containers, databases, security/firewall, Function Apps, etc.)
- Hands-on experience with Kubernetes/Docker/Helm.
- Deployment of Java builds and administration/configuration of Nginx/reverse proxy, load balancers, MS SQL, GitHub, and disaster recovery
- Linux: must have basic knowledge (user creation/deletion, ACLs, LVM, etc.)
- CI/CD: Azure DevOps or other automation tools such as Terraform, Jenkins, etc.
- Experience with SharePoint and O365 administration
- Azure/Kubernetes certification preferred.
- Microsoft Partnership experience is good to have.
- Excellent understanding of required technologies
- Good interpersonal skills and the ability to communicate ideas clearly at all levels
- Ability to work in unfamiliar business areas and to use your skills to create solutions
- Ability to both work in and lead a team and to deliver and accept peer review
- Flexible approach to working environment and hours to meet the needs of the business and clients
Must Haves:
- Hands-on experience with Kubernetes/Docker/Helm.
- Experience with Azure/AWS or any other cloud provider.
- Linux & CI/CD tools knowledge.
Experience & Education:
- A start-up mindset, with proven experience working in both smaller and larger organizations and multicultural exposure
- 4-9 years of experience working closely with the relevant technologies and developing world-class software and solutions
- Domain and industry experience by serving customers in one or more of these industries - Financial Services, Professional Services, other Retail Consumer Services
- A bachelor's degree, or equivalent, in Software Engineering and Computer Science
Araali Networks is seeking a highly driven DevOps engineer to help streamline the release and deployment process, while assuring high quality.
Responsibilities:
Use best practices to streamline the release process
Use IaC methods to manage cloud infrastructure
Use test automation for release qualification
Skills and Qualifications:
Hands-on experience in managing release pipelines
Good understanding of public cloud infrastructure
Working knowledge of Python test frameworks like PyUnit, and automation tools like Selenium, Cucumber, etc.
Strong analytical and debugging skills
Bachelor’s degree in Computer Science
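The "test automation for release qualification" above can be as small as a PyUnit (unittest) smoke suite run against a fresh deployment. A minimal, self-contained sketch; the health probe is stubbed out and the expected version is invented:

```python
import unittest

def service_health():
    """Stand-in for a real post-deploy probe (e.g., an HTTP GET against
    the service's health endpoint); hardcoded so the sketch runs anywhere."""
    return {"status": "ok", "version": "1.4.2"}

class ReleaseSmokeTest(unittest.TestCase):
    """Gate a release on a couple of cheap, high-signal checks."""

    def test_service_reports_healthy(self):
        self.assertEqual(service_health()["status"], "ok")

    def test_expected_version_is_deployed(self):
        self.assertEqual(service_health()["version"], "1.4.2")

# Run the suite programmatically, as a release pipeline step might.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReleaseSmokeTest)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

A pipeline would fail the release stage whenever `result.wasSuccessful()` is false.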
About Araali:
Araali Networks is a SaaS-based cybersecurity startup that has raised a total of $10M from well-known investors like A Capital, Firebolt Ventures, and SV Angels, and through a strategic investment by a publicly traded security company.
The company is disrupting the cloud firewall market by auto-creating nano-perimeters around every cloud app. The Araali solution enables developers to focus on features and improves the security posture through simplification and automation.
The security controls are embedded at the time of DevOps. The precision of Araali controls also helps with security operations where alerts are precise and intelligently routed to the right app team, making it actionable in real-time.
As a DevOps Engineer, you will be responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Build Docker images and maintain Kubernetes clusters.
- Develop and maintain automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters, patching when necessary.
- Work with cloud security tools to keep applications secure.
- Participate in the software development lifecycle, specifically the infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
Required Technical and Professional Expertise:
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell, Python, Terraform, and Ansible, Puppet, or Chef.
- Experience with, and a good understanding of, any cloud such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient troubleshooting skills, with a proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Middleware technology or database knowledge is desirable.
- Experience with the Jira tool is a plus.
We look forward to connecting with you. We will wait around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager. We will aim to close this requirement within a reasonable window, and candidates will be kept informed of the feedback and their application status.
Job Title: Senior Cloud Infrastructure Engineer (AWS)
Department & Team: Technology
Location: India / UK / Ukraine
Reporting To: Infrastructure Services Manager
Role Purpose:
The purpose of the role is to ensure high systems availability across a multi-cloud environment, enabling the business to continue meeting its objectives.
This role will be mostly AWS/Linux focused but requires an understanding of the comparative solutions in Azure.
The successful candidate should want to remain fully hands-on while taking on Team Lead responsibilities in future.
Client's cloud strategy is based on a dual-vendor solutioning model utilising AWS and Azure services. This gives us access to more technologies and helps mitigate risks across our infrastructure.
The Infrastructure Services Team is responsible for the delivery and support of all infrastructure used by Client, twenty-four hours a day, seven days a week. The team's primary function is to install, maintain, and implement all infrastructure-based systems, both on-premise and cloud hosted. The Infrastructure Services group consists of three teams:
1. Network Services Team – responsible for the IP network and its associated components
2. Platform Services Team – responsible for server and storage systems
3. Database Services Team – responsible for all databases
This role reports directly to the Infrastructure Services Manager and is responsible for the day-to-day running of the multi-cloud environment, as well as playing a key part in designing best-practice solutions. It will enable the Client business to achieve its stated objectives by playing a key role in the Infrastructure Services Team's pursuit of world-class benchmarks of customer service and support.
Responsibilities:
Operations
- Deliver end-to-end technical and user support across all platforms (on-premise, Azure, AWS)
- Day-to-day, fully hands-on OS management responsibilities (Windows and Linux operating systems)
- Ensure robust server patching schedules are in place and meticulously followed, to help reduce security-related incidents.
- Contribute to continuous improvement efforts around cost optimisation, security enhancement, performance optimisation, operational efficiency, and innovation.
- Take an ownership role in delivering technical projects, ensuring best-practice methods are followed.
- Design and deliver solutions around the concept of "Planning for Failure". Ensure all solutions are deployed to withstand system/AZ failure.
- Work closely with Cloud Architects and the Infrastructure Services Manager to identify and eliminate "waste" across cloud platforms.
- Assist several internal DevOps teams with the day-to-day running of pipeline management, and drive standardisation where possible.
- Ensure all Client data, in all forms, is backed up in a cost-efficient way.
- Use the appropriate monitoring tools to ensure all cloud and on-premise services are continuously monitored.
- Drive utilisation of the most efficient methods of resource deployment (Terraform, CloudFormation, Bootstrap)
- Drive the adoption, across the business, of serverless / open-source / cloud-native technologies where applicable.
- Ensure system documentation remains up to date and is designed according to AWS/Azure best-practice templates.
- Participate in detailed architectural discussions, calling on internal/external subject-matter experts as needed, to ensure solutions are designed for successful deployment.
- Take part in regular discussions with business executives to translate their needs into technical and operational plans.
- Engage with vendors regularly to verify solutions and troubleshoot issues.
- Design and deliver technology workshops to other departments in the business.
- Take initiative for the improvement of service delivery.
- Ensure that Client delivers a service that resonates with customers' expectations and sets Client apart from its competitors.
- Help design the infrastructure and processes necessary to support the recovery of critical technology and systems, in line with contingency plans for the business.
- Continually assess working practices and review them with a view to improving quality and reducing costs.
- Champion the case for new technology, and ensure new technologies are investigated and proposals put forward regarding suitability and benefit.
- Motivate and inspire the rest of the infrastructure team, and undertake the steps necessary to raise competence and capability as required.
- Help develop a culture of ownership and quality throughout the Infrastructure Services team.
Skills & Experience:
- AWS Certified Solutions Architect – Professional: REQUIRED
- Microsoft Azure Fundamentals (AZ-900): REQUIRED as the minimum Azure certification
- Red Hat Certified Engineer (RHCE): REQUIRED
- Must be able to demonstrate working knowledge of designing, implementing, and maintaining best-practice AWS solutions (and, to a lesser extent, Azure)
- Proven examples of ownership of large AWS project implementations in enterprise settings
- Experience managing the monitoring of infrastructure/applications using tools including CloudWatch, SolarWinds, New Relic, etc.
- Must have practical working knowledge of driving cost optimisation, security enhancement, and performance optimisation
- Solid understanding of, and experience with, transitioning IaaS solutions to serverless technology
- Must have working production knowledge of deploying infrastructure as code using Terraform
- Must be able to demonstrate security best practice when designing solutions in AWS
- Working knowledge of optimising network traffic performance and delivering high availability while keeping a check on costs
- Working experience of on-premise-to-cloud migrations
- Experience of data centre technology infrastructure development and management
- Must have experience working in a DevOps environment
- Good working knowledge of WAN connectivity and how it interacts with the various entry-point options into AWS, Azure, etc.
- Working knowledge of server and storage devices
- Working knowledge of MySQL and SQL Server / cloud-native databases (RDS/Aurora)
- Experience of carrier-grade networking, on-prem and cloud
- Experience in virtualisation technologies
- Experience in ITIL and project management
- Providing senior support to the Service Delivery team
- Good understanding of new and emerging technologies
- Excellent presentation skills to both internal and external audiences
- The ability to share your specific expertise with the rest of the Technology group
- Experience with MVNO or a Network Operations background within the telecoms industry (optional)
- Working knowledge of one or more European languages (optional)
Behavioural Fit:
- Professional appearance and manner
- High personal drive; results oriented; makes things happen; "can-do" attitude
- Can work and adapt within a highly dynamic and growing environment
- Team player; effective at building close working relationships with others
- Effectively manages diversity within the workplace
- Strong focus on service delivery and the needs and satisfaction of internal clients
- Able to see issues from a global, regional, and corporate perspective
- Able to effectively plan and manage large projects
- Excellent communication and interpersonal skills at all levels
- Strong analytical, presentation, and training skills
- Innovative and creative
- Demonstrates technical leadership
- Visionary and strategic view of technology enablers (creative and innovative)
- High verbal and written communication ability; able to influence effectively at all levels
- Possesses the technical expertise and knowledge to lead by example and input into technical debates
- Depth and breadth of experience in infrastructure technologies
- Enterprise mentality and global mindset
- Sense of humour
Role Key Performance Indicators:
- Design and deliver repeatable, best-in-class cloud solutions.
- Proactively monitor service quality and take action to scale operational services in line with business growth.
- Generate operating efficiencies, to be agreed with the Infrastructure Services Manager.
- Establish a "best in sector" level of operational service delivery and insight.
- Help create an effective team.
Required Skills and Experience
- 4+ years of relevant experience with DevOps tools (Jenkins, Ansible, Chef, etc.)
- 4+ years of experience in continuous integration/deployment and software tools development with Python, shell scripts, etc.
- Building and running Docker images and deploying on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, CloudWatch, ECS, ECR, EKS)
- Knowledge of, and experience working with, container technologies such as Docker and Amazon ECS, EKS, and Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge of, and experience with, cloud orchestration tools such as AWS CloudFormation, Terraform, etc.
- Experience implementing "infrastructure as code", "pipeline as code", and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
Good to have:
- Strong understanding of security concepts and methodologies, such as SSH, public-key encryption, access credentials, certificates, etc., and the ability to apply them.
- Knowledge of database administration, e.g., MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, and Confluence.
- Work with Leads and Architects on the design and implementation of technical infrastructure, platforms, and tools that support modern best practices and improve the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
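One concrete example of the kind of check such CI/CD pipelines can codify is an automated canary gate: compare the canary's error rate against the baseline before promoting a rollout. A minimal sketch; the counts and threshold are illustrative, not taken from any specific tool:

```python
def canary_gate(baseline_errors, baseline_total, canary_errors, canary_total,
                max_rate_increase=0.10):
    """Pass the gate if the canary error rate exceeds the baseline
    error rate by no more than max_rate_increase (absolute)."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    return canary_rate <= baseline_rate + max_rate_increase

# Healthy canary: ~0.15% errors vs a ~0.12% baseline -> promote.
print(canary_gate(12, 10_000, 15, 10_000))      # True
# Regressed canary: 15% errors -> fail the gate and roll back.
print(canary_gate(12, 10_000, 1_500, 10_000))   # False
```

Tools like Argo Rollouts express the same idea declaratively, querying a metrics backend such as Prometheus for the rates instead of taking raw counts.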