Why you should join us
- You will join the mission to create a positive impact on millions of people's lives
- You get to work on the latest technologies in a culture that encourages experimentation
- You get to work with superhumans (Psst: look up super human1, super human2, super human3, super human4)
- You get to work in an accelerated learning environment
What you will do
- You will provide deep technical expertise to your team in building future-ready systems
- You will help develop a robust roadmap for ensuring operational excellence
- You will set up infrastructure on AWS, represented as code (see the sketch after this list)
- You will work on several automation projects that provide great developer experience
- You will set up secure, fault-tolerant, reliable, and performant systems
- You will establish clean and optimised coding standards for your team that are well documented
- You will set up systems that are easy to maintain and provide a great developer experience
- You will actively mentor and participate in knowledge sharing forums
- You will work in an exciting startup environment where you can be ambitious and try new things :)
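To make the "infrastructure represented as code" bullet concrete, here is a minimal sketch of defining AWS resources in code with the AWS CDK for Python (an assumption for illustration only; the team may equally use Terraform or another tool, and every resource name below is a placeholder):

```python
# Minimal sketch: an S3 bucket and a tagged VPC defined as code with AWS CDK v2 (Python).
# Assumes aws-cdk-lib and constructs are installed; names are illustrative, not the real stack.
from aws_cdk import App, Stack, Tags
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class CoreInfraStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Versioned, encrypted bucket for build artifacts
        s3.Bucket(self, "ArtifactBucket",
                  versioned=True,
                  encryption=s3.BucketEncryption.S3_MANAGED)
        # A small VPC for application workloads
        vpc = ec2.Vpc(self, "AppVpc", max_azs=2)
        Tags.of(vpc).add("team", "platform")

app = App()
CoreInfraStack(app, "core-infra")
app.synth()
```

Running `cdk deploy` against such a stack provisions (or updates) the resources, so the infrastructure lives in version control like any other code.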
You should apply if
- You have a strong foundation in Computer Science concepts and programming fundamentals
- You have 8+ years of experience setting up cloud infrastructure, especially on AWS
- You have set up and maintained reliable systems that operate at high scale
- You have experience in hardening and securing cloud infrastructures
- You have a solid understanding of computer networking, network security and CDNs
- You have extensive experience with AWS and Kubernetes, and ideally Terraform
- You have experience building automation tools for code builds and deployment (preferably in JS)
- You understand the hustle of a startup and are good with handling ambiguity
- You are curious, a quick learner and someone who loves to experiment
- You insist on the highest standards of quality, maintainability, and performance
- You work well in a team to enhance your impact

About GoodWorker
GoodWorker is an exciting new technology start-up with a social agenda, set up with one dream in mind: to help transform the lives of blue-collar workers the world over by leveraging data and decentralized technologies. Our focus is to start by helping them advance their careers, and ultimately to enable them to improve the quality of life for themselves and their families.
GoodWorker is a joint venture between SchoolNet & LemmaTree (a 100% subsidiary of Temasek); we will talk in a bit about how each of these names adds to our story. We raised USD 35 million in late 2020. We then used 2021 to build our founding team (headquartered in Bangalore), invested in deep consumer and market research to understand the lives of blue-collar workers, and identified opportunities to do better by them. We also launched and scaled a business model that today leverages a marketplace of hiring partners across India, helping workers not only discover jobs across sectors but also handholding them until they finally join the employer of their choice.
Building on the insights and expertise developed during this period, we have launched our ‘career builder marketplace’, comprising multiple technology platforms.
Jobs Classified, our marketplace, helps workers discover jobs and trusted employers, while L.earn, our creator-led learning community platform, engages workers to build and upgrade their job and life skills. Together these platforms will engage with millions of blue-collar workers, which is one step in the right direction towards realizing our vision. To achieve this, we are rapidly hiring passionate, top-quality professionals (we are already a 140+ strong team and counting).
Check out our app here: https://play.google.com/store/apps/details?id=in.goodworker.jobsearch.app
Why should you join us?
Don’t join us just because you are looking for another great opportunity. We are a tribe that takes pride and passion very seriously in solving one of the most challenging and complex problems, not only for India but for the world: transforming a billion lives globally. If this sounds interesting, read on…
We are a company that wants to do our bit to “Uplift & Elevate” the lives of a billion workers. Here is your opportunity to create an impact.
GoodWorker is on a mission to transform the lives of billions. This is made in India, for India and beyond (the world is our canvas to create an impact), and we shall not settle until we have reached our North Star, which is to “Democratize access to better lives by empowering a billion workers globally”. Sounds very aspirational, doesn’t it? Think big or go home is the motto; we are here to create impact for the long term, and unless we bring a 10x, growth mindset we shall not be able to solve tough problems. We believe “Tough people like to solve some of the toughest problems in the world.”
- It needs resilience, grit, and passion to achieve such a bold goal. Are you game for it?
If you are, then we are too.
Our Promise to you
- You will not just be a part of it, you will create a legacy, because we are building a futuristic company that sits at a sweet intersection of web 3 technologies.
- You will be part of something bigger, as we believe in serving not just India but beyond, so that we create impact at scale.
- You will be part of a culture that strives to deliver outcomes for our most important customer, blue-collar workers, and believes in creativity, innovation, and long-term thinking. A growth mindset is the way we think.
- It is our company: not only do we do good, we are also focused on wealth creation for our stakeholders, with employees among the most important of them. Equity for everyone, that’s our philosophy.
- You will work with the best, like-minded people! A tribe that is breaking boundaries.
- We are here to succeed! We are invested in it for the long term.
Similar jobs
We are a managed video calling solution built on top of WebRTC that allows its customers to integrate live video calls within existing solutions in less than 10 lines of code. We provide a completely managed SDK that solves the problem of battling the endless cases of video calling APIs.
Location - Bangalore (Remote)
Experience - 3+ Years
Requirements:
● Should have 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus

As part of the Cloud Platform / DevOps team at Upswing, you will get to work on building state-of-the-art infrastructure for the future. You will also be –
- Building infrastructure on AWS driven through Terraform, and building automation tools for deployment, infrastructure management, and the observability stack
- Building and Scaling on Kubernetes
- Ensuring the Security of Upswing Cloud Infra
- Building security checks and automation to improve the overall security posture (see the sketch after this list)
- Building automation stack for components like JVM-based applications, Apache Pulsar, MongoDB, PostgreSQL, Reporting Infra, etc.
- Mentoring people across the teams to enable best practices
- Mentoring and guiding team members to upskill and helping them develop world-class fintech infrastructure
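To illustrate the kind of security-check automation this covers, here is a minimal sketch using the official Kubernetes Python client (an assumption for illustration; Upswing's actual tooling isn't specified) that flags containers whose securityContext does not enforce runAsNonRoot:

```python
# Minimal sketch: flag containers that are not explicitly required to run as non-root.
# Assumes the official `kubernetes` Python client and a reachable cluster/kubeconfig.
# Pod-level securityContext defaults are ignored in this simplified check.
from kubernetes import client, config

def find_root_capable_containers():
    config.load_kube_config()            # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    findings = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for c in pod.spec.containers:
            sc = c.security_context
            if sc is None or not sc.run_as_non_root:
                findings.append((pod.metadata.namespace, pod.metadata.name, c.name))
    return findings

if __name__ == "__main__":
    for ns, pod, container in find_root_capable_containers():
        print(f"{ns}/{pod}/{container}: runAsNonRoot is not enforced")
```

A script like this can run as a scheduled job and feed alerts into the observability stack.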
What will you do if you join us?
- Write a lot of code
- Engage in a lot of cross-team collaboration to independently drive forward infrastructure initiatives and DevOps practices across the org
- Take ownership of existing, ongoing, and future initiatives
- Plan architecture for upcoming infrastructure
- Build for Scale, Resiliency & Security
- Introduce best practices for DevOps and cloud in the team
- Mentor new/junior team members and eventually build your own team
You should have
- Curiosity for on-the-job learning and experimenting with new technologies and ideas
- A strong background in Linux environments
- Programming skills and experience
- Strong experience in Cloud technologies, Security and Networking concepts, Multi-cloud environments, etc.
- Experience with at least one scripting language (GoLang/Python/Ruby/Groovy)
- Experience in Terraform is highly desirable but not mandatory
- Experience with Kubernetes and Docker is required
- An understanding of Java technologies and the related stack
- Any other DevOps-related experience will be considered
You will work on:
You will be working on our client’s massive-scale infrastructure and CloudOps requirements. You will work directly in the customer’s cloud environment and manage their product infrastructure life cycle.
What you will do (Responsibilities):
Understand how the product works and how it is used by the customers
Day-to-day operational support of product infrastructure on all three major public cloud services
Interact with customers on/off-site to troubleshoot issues, provide workarounds by leveraging your troubleshooting skills
Create and manage processes for the secure operation of customer cloud environments
System (Windows, Linux, and SQL databases) and network administration
Use incident management tools for incident analysis and troubleshooting. Identify, escalate, and communicate issues in a timely manner.
Contribute to building a knowledge base centered on known incidents/defects, Frequently Asked Questions, resolved issues, applying lessons learned and previous resolutions to new incidents.
Build hosting environments for clients all over the world
Collaborate with product managers and engineers to ensure that critical and time-sensitive projects run smoothly and achieve the business outcome
On-call responsibilities to respond to emergency situations and scheduled maintenance
Contribute to and maintain documentation for systems, processes, procedures, and infrastructure configuration.
What you bring (Skills):
Minimum 2 years of relevant experience in CloudOps (mainly in MS Azure), infrastructure management, and support
Strong Windows Systems, SQL Database, and Network administration skills
Strong understanding of cloud applications, concepts, and the storage, compute, and networking components
Readiness to work in 24x7 rotational shift model
Scripting in PowerShell, Linux shell, Python or C#
Proactive and self-motivated, committed to achieving deadlines, meeting SLAs, and producing results
Excellent customer service and people skills. Excellent analytical, written, and oral communication and relationship building skills
Ability to be a good listener, and to understand customer issues. Ability to provide innovative workarounds or design a solution to fix a customer’s problem
Great if you know (Skills):
AzureOps:
Comfortable using TFS (Team Foundation Server) and Azure DevOps workflows
Comfortable with Visual Studio and SQL Server Data Tools
Exposure to deployment using the Azure cloud console and ARM templates (a minimal sketch follows this list)
Experience in handling Azure AD & RBAC related activities will be a plus
Comfortable with basic DevOps concepts like CI/CD, and Infra-automation
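As a rough illustration of ARM-template-driven deployment, here is a minimal sketch using the Azure Python SDK (azure-identity and azure-mgmt-resource are assumed; subscription, resource group, and file names are placeholders, and the Azure CLI or an Azure DevOps pipeline would work just as well):

```python
# Minimal sketch: deploy an ARM template into a resource group with the Azure Python SDK.
# Assumes azure-identity and azure-mgmt-resource are installed; all names/IDs are placeholders.
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"          # placeholder
resource_group = "rg-demo"                     # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

with open("template.json") as f:               # an exported or hand-written ARM template
    template = json.load(f)

poller = client.deployments.begin_create_or_update(
    resource_group,
    "demo-deployment",
    {
        "properties": {
            "mode": "Incremental",             # leave resources not in the template untouched
            "template": template,
            "parameters": {},
        }
    },
)
print(poller.result().properties.provisioning_state)
```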
Advantage Cognologix:
A higher degree of autonomy, startup culture & small teams
Opportunities to become an expert in emerging technologies
Remote working options for the right maturity level
Competitive salary & family benefits
Performance-based career advancement
About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients’ strategic goals.
We are a DevOps-focused organization, helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration, and delivery.
Benefits Working With Us:
- Health & Wellbeing
- Learn & Grow
- Evangelize
- Celebrate Achievements
- Financial Wellbeing
- Medical and Accidental cover.
- Flexible Working Hours.
- Sports Club & much more.
mavQ is seeking a motivated Lead DevOps Engineer to join our team. mavQ is a low-code Artificial Intelligence platform that enables organizations to build fast, cohesive, and user-friendly applications to digitally transform while creating valuable insights. mavQ offers AI-enabled products such as intelligent document processing, a contact center platform, a suite of electronic research administration tools, DataX (a data orchestration and pipeline tool with a built-in machine learning workbench), and more. mavQ also offers professional services for full-stack development and AI services on a multi-cloud ecosystem. Learn more...
Work Location: Hyderabad, India.
What You'll Do:
- Lead the entire DevOps practice at mavQ and guide the team in managing the infrastructure and CI/CD ecosystem.
- Create suitable DevOps channels across the organization.
- Conceptualize and manage tools and services to be used by the organization and by external users of the platform.
- Automate all operational and repetitive tasks to improve the efficiency and productivity of all development teams.
- Research and propose new solutions to improve the mavQ platform in aspects of speed, scalability, and security.
- Automate and manage the cloud infrastructure of the organization distributed across the globe and across multiple cloud providers such as Google Cloud and AWS.
- Manage CI/CD, Source Control and IAM services for the organization.
- Ensure thorough logging, monitoring, and alerting for all services and code running in the organization (see the sketch after this list).
- Work with development teams on communications and protocols for distributed microservices.
- Oversee the SRE practices and define KPIs and SLAs for various services provided by the DevOps team.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must have hands-on and thorough knowledge of Linux, Kubernetes, Docker, and Jenkins.
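As one small example of the logging/monitoring/alerting point above, here is a minimal sketch of exposing service metrics for Prometheus to scrape, using the prometheus_client Python package (an illustration only; metric names, labels, and the port are placeholders):

```python
# Minimal sketch: expose request metrics from a Python service for Prometheus to scrape.
# Assumes the prometheus_client package; metric and label names are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint", "status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))      # stand-in for real work
    REQUESTS.labels(endpoint=endpoint, status="200").inc()

if __name__ == "__main__":
    start_http_server(9100)                        # serves /metrics on port 9100
    while True:
        handle_request("/health")
```

Prometheus scrapes the /metrics endpoint, and alerting rules or Grafana dashboards can then be layered on top of the collected series.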
What You’ll Bring:
- Experience maintaining and deploying highly-available, fault-tolerant systems at scale
- Practical experience with containerization and clustering (Kubernetes / ECS / Docker / OpenShift)
- Version control system experience (e.g. Git, SVN)
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Experience with configuration management tools (e.g. Ansible, Chef)
- Experience with infrastructure-as-code (e.g. Terraform, Cloudformation)
- Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda)
- Container Registry Solutions (Harbor, JFrog, Quay etc)
- Operational (e.g. HA/Backups) NoSQL experience (e.g. Cassandra, MongoDB, Redis)
- Good understanding of Kubernetes networking and security best practices
- Monitoring tools like Datadog, or open-source tools like Prometheus and Nagios
- Load Balancer Knowledge (AVI Networks, NGINX)
Who You Are:
- 8 to 12 years of experience in application development using Java, Spring, and frontend frameworks such as Angular, with experience deploying to and managing applications on cloud infrastructure, mainly AWS / GCP.
- Experience managing cloud platforms using Terraform, Ansible, and Packer.
- Ability to provide expert strategic advice for cloud application development/ deployment, private versus public cloud options and virtualization.
- Strong understanding of DevOps practices/tools including CICD Pipelines, IaC, SSO, Monitoring, Orchestration.
- Experience in managing the delivery of a high-availability infrastructure to support mission-critical systems and services.
- Experience architecting, designing, and deploying for high scalability and high availability, and implementing cloud security.
- Ability to review, analyze, and develop architectural requirements at the domain level within a product portfolio, team, or partnership engagement.
- Experience in handling Complex IT Infrastructure Solution Design & Implementation.
- Identify, document, triage and track issues to ensure resolution.
- Excellent communication & interpersonal skills, effective problem-solving skills and logical thinking ability and a strong commitment to professional and customer service excellence.
- Excellent teamwork skills & the ability to direct efforts of cross-functional teams for collaborative propositions.
Good to have:
- Experience in the software development process and tools and languages like Shell scripting, GoLang, Python, and Git.
- Knowledge in handling distributed services like ElasticSearch, Kafka, MongoDB, etc.
- Experience working in a consulting environment.
- Synthesis and analytical skills.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs / courses at top universities globally
- Professional Development opportunities through various sponsored certifications on multiple technology stacks including Google Cloud, Amazon & others.
***
Company Name: Petpooja!
Location: Ahmedabad
Designation: DevOps Engineer
Experience: Between 2 to 7 Years
Candidates from Ahmedabad will be preferred
Job Location: Ahmedabad
Job Responsibilities:
- Plan, implement, and maintain the software development infrastructure
- Introduce and oversee software development automation across cloud providers like AWS and Azure
- Help develop, manage, and monitor continuous integration and delivery systems
- Collaborate with software developers, QA specialists, and other team members to ensure the timely and successful delivery of new software releases
- Contribute to software design and development, including code review and feedback
- Assist with troubleshooting and problem-solving when issues arise
- Keep up with the latest industry trends and best practices while ensuring the company meets configuration requirements
- Participate in team improvement initiatives
- Help create and maintain internal documentation using Git or other similar applications
- Provide on-call support as needed
Qualification Required:
1. You should have experience handling various services on the AWS cloud.
2. Previous experience as a Site Reliability Engineer would be an advantage.
3. You should be well versed in various commands and hands-on with Linux and Ubuntu administration, and with other aspects of software development team requirements.
4. 2 to 7 years of experience managing AWS services such as Auto Scaling, Route 53, and various other internal networks.
5. An AWS certification is recommended.
Numerator is a data and technology company reinventing market research. Headquartered in Chicago, IL, Numerator has 1,600 employees worldwide. The company blends proprietary data with advanced technology to create unique insights for the market research industry that has been slow to change. The majority of Fortune 100 companies are Numerator clients.
Job Description
What We Do and How?
We are a market research company, revolutionizing how it's done! We mix fast paced development and unique approaches to bring best practices and strategy to our technology. Our tech stack is deep, leveraging several languages and frameworks including Python, C#, Java, Kotlin, React, Angular, and Django among others. Our engineering hurdles sit at the intersection of technologies ranging from mobile, computer vision and crowdsourcing, to machine learning and big data analytics.
Our Team
From San Francisco to Chicago to Ottawa, our R&D team is comprised of talented individuals spanning across a robust tech stack. The R&D team is comprised of product, data analytics, engineers across Front End, Back End, DevOps, Business Intelligence, ETL, Data Science, Mobile Apps, and much more. Across these different groups we work towards one common goal: To build products into efficient and seamless user experiences that help our clients succeed.
Numerator is looking for an Infrastructure Engineer to join our growing team. This is a unique opportunity where you will get a chance to work with established and rapidly evolving platforms that handle millions of requests and massive amounts of data. In this position, you will be responsible for taking on new initiatives to automate, enhance, maintain, and scale services in a rapidly-scaling SaaS environment.
As a member of our team, you will make an immediate impact as you help build out and expand our technology platforms across several software products. This is a fast-paced role with high growth, visibility, impact, and where many of the decisions for new projects will be driven by you and your team from inception through production.
Some of the technologies we frequently use include: Terraform, Ansible, SumoLogic, Kubernetes, and many AWS-native services.
• Develop and test the cloud infrastructure to scale a rapidly growing ecosystem.
• Monitor and improve DevOps tools and processes, automate mundane tasks, and improve system reliability.
• Provide deep expertise to help steer scalability and stability improvements early in the life-cycle of development while working with the rest of the team to automate existing processes that deploy, test, and lead our production environments.
• Train teams to improve self-healing and self-service cloud-based ecosystems in an evolving AWS infrastructure (see the sketch after this list).
• Build internal tools to demonstrate performance and operational efficiency.
• Develop comprehensive monitoring solutions to provide full visibility to the different platform components using tools and services like Kubernetes, Sumologic, Prometheus, Grafana.
• Identify and troubleshoot any availability and performance issues at multiple layers of deployment, from hardware, operating environment, network, and application.
• Work cross-functionally with various teams to improve Numerator’s infrastructure through automation.
• Work with other teams to assist with issue resolutions related to application configuration, deployment, or debugging.
• Lead by example and evangelize DevOps best practice within other engineering teams at Numerator.
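As an illustration of the self-healing automation mentioned above, here is a minimal boto3 sketch (the group name, port, and health endpoint are placeholders, not Numerator's actual setup) that reports instances failing an HTTP probe as Unhealthy so their Auto Scaling group replaces them:

```python
# Minimal sketch: mark instances failing an application health probe as Unhealthy so the
# Auto Scaling group replaces them. Assumes boto3 with credentials; names are placeholders.
import boto3
import requests

ASG_NAME = "web-asg"                     # placeholder Auto Scaling group name
autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

def probe(instance_id: str) -> bool:
    """Return True if the instance answers its HTTP health endpoint."""
    ip = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0].get("PrivateIpAddress")
    try:
        return requests.get(f"http://{ip}:8080/healthz", timeout=2).ok
    except requests.RequestException:
        return False

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME])["AutoScalingGroups"][0]
for inst in group["Instances"]:
    if inst["LifecycleState"] == "InService" and not probe(inst["InstanceId"]):
        # The ASG terminates and replaces instances reported as Unhealthy
        autoscaling.set_instance_health(
            InstanceId=inst["InstanceId"], HealthStatus="Unhealthy")
```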
Skills & Requirements
What you bring
• A minimum of 3 years of work experience in backend software, DevOps, or a related field.
• A passion for software engineering, automation and operations and are excited about reliability, availability and performance.
• Availability to participate in after-hours on-call support with your fellow engineers.
• Strong analytical and problem-solving mindset combined with experience troubleshooting large scale systems.
• Fundamental knowledge of networking, operating systems, and package/build systems (IP subnets and routing, ACLs, core Ubuntu, pip and npm).
• Experience with automation technologies to build, deploy and integrate both infrastructure and applications (e.g., Terraform, Ansible).
• Experience using scripting languages like Python and *nix tools (Bash, sed/awk, Make).
• You enjoy developing and managing real-time distributed platforms and services that scale to billions of requests.
• Have the ability to manage multiple systems across stratified environments.
• A deep enthusiasm for the Cloud and DevOps and keen to get other people involved.
• Experience with scaling and operationalizing distributed data stores, file systems and services.
• Running services in AWS or other cloud platforms, strong experience with Linux systems.
• Experience in modern software paradigms including cloud applications and serverless architectures.
• You look ahead to identify opportunities and foster a culture of innovation.
• BS, MS or Ph.D. in Computer Science or a related field, or equivalent work experience.
Nice to haves
• Previous experience working with a geographically distributed software engineering team.
• Experience working with Jenkins or Circle-CI
• Experience with storage optimizations and management
• Solid understanding of building scalable, highly performant systems and services
• Expertise with big data, analytics, machine learning, and personalization.
• Start-up or CPG industry experience
If this sounds like something you would like to be part of, we’d love for you to apply! Don't worry if you think that you don't meet all the qualifications here. The tools, technology, and methodologies we use are constantly changing and we value talent and interest over specific experience.
Disclaimer: We do not charge any fee for employment and the same applies to the Recruitment Partners who we work with. Numerator is an equal opportunity employer. Employment decisions are based on merit. Additionally, we do not ask for any refundable security deposit to be paid in bank accounts for employment purposes. We request candidates to be cautious of misleading communications and not pay any fee/ deposit to individuals/ agencies/ employment portals on the pretext of attending Numerator interview process or seeking employment with us. These would be fraudulent in nature. Anyone dealing with such individuals/agencies/
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
A.P.T Portfolio is a high frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high-performance, reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement security best practices and a data isolation policy between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backups of all crucial data/binaries/code etc. in a secure manner, at intervals warranted by the use case. Ensure that recovery from backups is tested and seamless (see the sketch after this list).
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third party tools to put in a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - this shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo, etc.
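To illustrate the backup item above, here is a minimal boto3 sketch (bucket and paths are placeholders) that archives a directory to S3 with server-side encryption and does a bare-minimum existence check; a real recovery test would restore and validate the data:

```python
# Minimal sketch: encrypted, dated backup of a directory to S3, plus a trivial restore check.
# Assumes boto3 with credentials; bucket and path names are placeholders.
import datetime
import pathlib
import shutil

import boto3

BUCKET = "example-backup-bucket"                      # placeholder
s3 = boto3.client("s3")

def backup(directory: str) -> str:
    """Archive `directory` and upload it with server-side encryption; return the S3 key."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    archive = shutil.make_archive(f"/tmp/backup-{stamp}", "gztar", directory)
    key = f"backups/{pathlib.Path(directory).name}/{stamp}.tar.gz"
    s3.upload_file(archive, BUCKET, key,
                   ExtraArgs={"ServerSideEncryption": "AES256"})
    return key

def verify(key: str) -> bool:
    """A real recovery test should restore and diff; here we only confirm the object exists."""
    return "ContentLength" in s3.head_object(Bucket=BUCKET, Key=key)

if __name__ == "__main__":
    print(verify(backup("/etc")))
```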
Qualifications
- Bachelors or Masters Level Degree, preferably in CSE/IT
- 10+ years of relevant experience in sys-admin function
- Must have strong knowledge of IT infrastructure, Linux, networking, and grid computing.
- Must have a strong grasp of automation & data management tools.
- Proficient in scripting languages and Python
Desirables
- Professional attitude; a co-operative and mature approach to work; must be focused, structured, and well considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
As an Infrastructure Engineer at Navi, you will be building a resilient infrastructure platform using modern infrastructure engineering practices.
You will be responsible for the availability, scaling, security, performance, and monitoring of the Navi Cloud platform. You’ll be joining a team that follows best practices in infrastructure as code.
Your Key Responsibilities
- Build out infrastructure components like an API gateway, service mesh, and service discovery, and a container orchestration platform like Kubernetes
- Developing reusable Infrastructure code and testing frameworks
- Build meaningful abstractions to hide the complexities of provisioning modern infrastructure components
- Design a scalable centralized logging and metrics platform (a minimal logging sketch follows this list)
- Drive solutions to reduce Mean Time To Recovery (MTTR) and enable high availability
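As a small illustration of what feeding a centralized logging platform can look like, here is a minimal sketch of JSON-structured logging using only the Python standard library (field and service names are illustrative, not Navi's actual schema):

```python
# Minimal sketch: emit JSON-structured logs so a centralized logging platform
# (e.g. an ELK/EFK-style stack) can index fields without extra parsing.
# Uses only the standard library; field names are illustrative.
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "service": "payments-api",          # illustrative service tag
        }
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("orders").info("order created")
```

Structured fields like level, logger, and service make it straightforward to build queries and alerts downstream, which in turn helps drive MTTR down.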
What to Bring
- Good to have experience managing large-scale cloud infrastructure, preferably AWS and Kubernetes
- Experience in developing applications using programming languages like Java, Python and Go
- Experience in handling logs and metrics at a high scale.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Experience working on Linux based infrastructure
- Strong hands-on knowledge of setting up production, staging, and dev environments on AWS/GCP/Azure
- Strong hands-on knowledge of technologies like Terraform, Docker, Kubernetes
- Strong understanding of continuous testing environments such as Travis-CI, CircleCI, Jenkins, etc.
- Configuring and managing databases such as MySQL and MongoDB
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles

