Why you should join us
- You will join the mission to create positive impact on millions of people's lives
- You get to work on the latest technologies in a culture that encourages experimentation
- You get to work with superhumans (Psst: look up these super human1, super human2, super human3, super human4)
- You get to work in an accelerated learning environment
What you will do
- You will provide deep technical expertise to your team in building future-ready systems
- You will help develop a robust roadmap for ensuring operational excellence
- You will set up infrastructure on AWS, represented as code
- You will work on several automation projects that provide great developer experience
- You will set up secure, fault-tolerant, reliable, and performant systems
- You will establish clean, optimised, and well-documented coding standards for your team
- You will set up systems in a way that are easy to maintain and provide a great developer experience
- You will actively mentor and participate in knowledge sharing forums
- You will work in an exciting startup environment where you can be ambitious and try new things :)
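As a rough sketch of the "infrastructure on AWS represented as code" bullet above: rather than creating resources by hand in the console, you declare them in files. A minimal, illustrative Terraform fragment (the region and bucket name are placeholders, not from this posting):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # placeholder region
}

# A hypothetical S3 bucket, declared in code instead of clicked together,
# so it can be reviewed, versioned, and reproduced.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket" # placeholder name
}
```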
You should apply if
- You have a strong foundation in Computer Science concepts and programming fundamentals
- You have been working on cloud infrastructure setup, especially on AWS, for 8+ years
- You have set up and maintained reliable systems that operate at high scale
- You have experience in hardening and securing cloud infrastructures
- You have a solid understanding of computer networking, network security and CDNs
- You have extensive experience with AWS and Kubernetes, and optionally Terraform
- You have experience building automation tools for code build and deployment (preferably in JS)
- You understand the hustle of a startup and are good with handling ambiguity
- You are curious, a quick learner and someone who loves to experiment
- You insist on the highest standards of quality, maintainability and performance
- You work well in a team to enhance your impact
About GoodWorker
GoodWorker is an exciting new technology start-up with a social agenda, set up with one dream in mind: to help transform the lives of blue-collar workers the world over by leveraging decentralized data technologies. Our focus is to start by helping them advance their careers and, ultimately, to enable them to improve the quality of life for themselves and their families.
GoodWorker is a joint venture between SchoolNet & LemmaTree (a 100% subsidiary of Temasek); we will talk in a bit about how each of these names adds to our story. We raised USD 35 Mn in late 2020. We then used 2021 to build our founding team (headquartered in Bangalore), invested in deep consumer & market research to understand the lives of blue-collar workers, and identified opportunities to do better by them. We also launched & scaled a business model that today leverages a marketplace of hiring partners across India to help workers not only discover jobs across sectors but also to handhold them until they finally join the employer of their choice.
Building on the insights and expertise developed during this period, we have launched our ‘career builder marketplace’, comprising multiple technology platforms.
Jobs Classified, our marketplace, helps workers discover jobs and trusted employers, while L.earn, our creator-led learning community platform, engages workers to build and upgrade their job and life skills. Together, these platforms will engage the millions of blue-collar workers, which is one step in the right direction towards realizing our vision. To achieve this, we are rapidly looking to hire passionate, top-quality professionals (we are already a 140+ strong team and counting).
Check out our app here: https://play.google.com/store/apps/details?id=in.goodworker.jobsearch.app
Why should you join us?
Don’t join us just because you are looking for another great opportunity. We are a tribe that takes pride and passion very seriously in solving one of the most challenging and complex problems, not only for India but for the world: transforming a billion lives globally. If this sounds interesting, read on…
We are a company that wants to do our bit to “Uplift & Elevate” the lives of a billion workers. Here is your opportunity to create an impact.
GoodWorker is on a mission to transform the lives of billions. This is made in India, for India and beyond (the world is our canvas to create an impact), and we shall not settle until we have reached our North Star: to “Democratize access to better lives by empowering a billion workers globally”. Sounds very aspirational, doesn’t it? “Think big or go home” is our motto; we are here to create long-term impact, and unless we bring a 10x growth mindset we will not be able to solve tough problems. We believe “tough people like to solve some of the toughest problems in the world”.
- It takes resilience, grit, and passion to achieve such bold goals. Are you game for it?
If you are, then we are too.
Our Promise to you
- You will not just be a part, you will create a legacy, because we are building a futuristic company at a sweet intersection point of Web3 technologies.
- You will be part of something bigger, as we believe in serving not just India but beyond, creating impact at scale.
- You will be part of a culture that strives to deliver outcomes for our most important customer, the blue-collar worker, and that believes in creativity, innovation, and long-term thinking. A growth mindset is the way we think.
- It is our company: not only do we do good, but we are also focused on wealth creation for our stakeholders, with employees among the most important of them. Equity for everyone: that’s our philosophy.
- You will work with the best, with like minds! A tribe that is breaking boundaries
- We are here to succeed! We are invested in it for the long run
Our organization relies on its central engineering workforce to develop and maintain a product portfolio of several different startups. As part of our engineering team, you'll work on several products every quarter. Our product portfolio grows continuously as we incubate more startups, which means that different products are very likely to use different technologies, architectures & frameworks - a fun place for smart tech lovers!
Responsibilities:
- 3-5 years of experience working in DevOps/DevSecOps.
- Strong hands-on experience in deployment, administration, monitoring, and hosting services of Kubernetes.
- Ensure compliance with security policies and best practices for RBAC (Kubernetes) and AWS services like EKS, VPC, IAM, EC2, Route53, and S3.
- Implement and maintain CI/CD pipelines for continuous integration and deployment.
- Monitor and troubleshoot issues related to infrastructure and application security.
- Develop and maintain automation scripts using Terraform for infrastructure as code.
- Hands-on experience in configuring and managing a service mesh like Istio.
- Experience working in Cloud, Agile, CI/CD, and DevOps environments. We live in the Cloud.
- Experience with Jenkins, Google Cloud Build, or similar.
- Good to have: experience using PaaS and SaaS services such as AWS S3 or Google BigQuery and Cloud Storage.
- Good to have: experience configuring, scaling, and monitoring database systems like PostgreSQL, MySQL, MongoDB, and so on.
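The RBAC compliance work mentioned above (Kubernetes RBAC alongside EKS and IAM) typically means declaring least-privilege roles in YAML. A minimal, illustrative sketch; the namespace, role, and service-account names are placeholders, not from this posting:

```yaml
# A namespaced Role granting read-only access to pods, bound to a
# hypothetical "ci-deployer" service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: pod-reader-binding
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: staging
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```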
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3 to 7 years of experience in DevSecOps or a related field.
- Strong hands-on experience with Kubernetes, EKS, AWS services such as VPC, IAM, EC2, and S3, Terraform (including Terraform backends and resources), and CI/CD tools like Jenkins.
- Strong understanding of DevOps principles and practices.
- Experience in developing and maintaining automation scripts using Terraform.
- Proficiency in any Linux scripting language.
- Familiarity with security tools and technologies such as WAF, IDS/IPS, and SIEM.
- Excellent problem-solving skills and ability to work independently and within a team.
- Good communication skills and ability to work collaboratively with cross-functional teams.
- Availability to join immediately or within 15 days, in Pune, Bangalore, or Noida.
About CAW Studios:
CAW Studios is a Product Engineering Studio. WE BUILD TRUE PRODUCT TEAMS for our clients. Each team is a small, well-balanced group of geeks and a product manager who together produce relevant and high-quality products. We believe the product development process needs to be fixed as most development companies operate as IT Services. Unlike IT Apps, product development requires Ownership, Creativity, Agility, and design that scales.
Know More About CAW Studios:
Website: https://www.cawstudios.com/
Benefits: Startup culture, powerful Laptop, free snacks, cool office, flexible work hours and work-from-home policy, medical insurance, and most importantly - the opportunity to work on challenging cutting-edge problems.
Candidates must have a minimum of 8 years of IT experience.
IST time zone.
Candidates should have hands-on experience in DevOps tooling (GitLab, Artifactory, SonarQube, AquaSec, Terraform & Docker/K8s).
Thanks & Regards,
Anitha. K
TAG Specialist
Job Description: DevOps Engineer
About Hyno:
Hyno Technologies is a unique blend of top-notch designers and world-class developers for new-age product development. Within the last 2 years we have collaborated with 32 young startups from India, the US, and the EU to find the optimum solutions to their complex business problems. We have helped them address issues of scalability and optimisation through the use of technology, at minimal cost. To us, any new challenge is an opportunity.
As part of its expansion plans, Hyno, in partnership with Sparity, is seeking an experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in enhancing our software development processes, optimising system infrastructure, and ensuring the seamless deployment of applications. If you are passionate about leveraging cutting-edge technologies to drive efficiency, reliability, and scalability in software development, this is the perfect opportunity for you.
Position: DevOps Engineer
Experience: 5-7 years
Responsibilities:
- Collaborate with cross-functional teams to design, develop, and implement CI/CD pipelines for automated application deployment, testing, and monitoring.
- Manage and maintain cloud infrastructure using tools like AWS, Azure, or GCP, ensuring scalability, security, and high availability.
- Develop and implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate the provisioning and management of resources.
- Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardised best practices.
- Work closely with development teams to provide support and guidance in building applications with a focus on scalability, reliability, and security.
- Perform regular security assessments and implement best practices for securing the entire development and deployment pipeline.
- Troubleshoot and resolve issues related to infrastructure, deployment, and application performance in a timely manner.
- Follow regulatory and ISO 13485 requirements.
- Stay updated with industry trends and emerging technologies in the DevOps and cloud space, and proactively suggest improvements to current processes.
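As a rough illustration of the CI/CD pipeline responsibilities above, here is a minimal GitLab CI sketch. The stage layout, image names, and commands are assumptions for illustration, not Hyno's actual pipeline:

```yaml
# Build a container image, run tests inside it, then apply Terraform.
stages: [build, test, deploy]

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  script:
    - docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" pytest

deploy:
  stage: deploy
  script:
    - terraform init
    - terraform apply -auto-approve
  environment: production
  when: manual   # gate production deploys behind a manual approval step
```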
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Minimum of 5 years of hands-on experience in DevOps, system administration, or related roles.
- Solid understanding of containerization technologies (Docker, Kubernetes) and orchestration tools
- Strong experience with cloud platforms such as AWS, Azure, or GCP, including services like ECS, S3, RDS, and more.
- Proficiency in at least one programming/scripting language such as Python, Bash, or PowerShell.
- Demonstrated experience in building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with container (Docker, ECS, EKS), serverless (Lambda), and Virtual Machine (VMware, KVM) architectures.
- Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Pulumi.
- Strong knowledge of monitoring and logging tools such as Prometheus, ELK stack, or Splunk.
- Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.
- Strong communication skills and the ability to work independently as well as in a team.
Nice to Have:
- Relevant certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
- Experience with microservices architecture and serverless computing.
Soft Skills:
- Excellent written and verbal communication skills.
- Ability to manage conflict effectively.
- Ability to adapt and be productive in a dynamic environment.
- Strong communication and collaboration skills supporting multiple stakeholders and business operations.
- Self-starter, self-managed, and a team player.
Join us in shaping the future of DevOps at Hyno in collaboration with Sparity. If you are a highly motivated and skilled DevOps Engineer, eager to make an impact in a remote setting, we'd love to hear from you.
DevOps Engineer
The DevOps team is one of the core technology teams of Lumiq.ai and is responsible for managing network activities, automating Cloud setups and application deployments. The team also interacts with our customers to work out solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how you can use various technologies to improve user experience, then Lumiq is the place of opportunities.
Job Description
- Explore the newest innovations in scalable and distributed systems.
- Help design the architecture of the project, solutions to existing problems, and future improvements.
- Make the cloud infrastructure and services smart by implementing automation and trigger-based solutions.
- Interact with Data Engineers and Application Engineers to create continuous integration and deployment frameworks and pipelines.
- Play with large clusters on different clouds to tune your jobs or to learn.
- Research new technologies, prove out concepts, and plan how to integrate or upgrade.
- Be part of discussions on other projects, to learn or to help.
Requirements
- 2+ years of experience as a DevOps Engineer.
- You understand networking, from physical networks to software-defined networking.
- You like containers and open-source orchestration systems like Kubernetes and Mesos.
- You have experience securing systems by creating robust access policies and enforcing network restrictions.
- You know how applications work, which is essential for designing distributed systems.
- You have experience with open-source projects and have discussed their shortcomings or problems with the community on several occasions.
- You understand that provisioning a Virtual Machine is not DevOps.
- You know you are not a SysAdmin but a DevOps Engineer: the person who develops the operations that keep the system running efficiently and scalably.
- You have exposure to Private Cloud, Subnets, VPNs, Peering, and Load Balancers, and have worked with them.
- You check logs before screaming about an error.
- Multiple screens make you more efficient.
- You are a doer who doesn't say the word "impossible".
- You understand the value of documenting your work.
- You understand the Big Data ecosystem and how you can leverage the cloud for it.
- You know these buddies - #airflow, #aws, #azure, #gcloud, #docker, #kubernetes, #mesos, #acs
Experienced with Azure DevOps, CI/CD and Jenkins.
Experience is needed in Kubernetes (AKS), Ansible, Terraform, and Docker.
Good understanding of Azure Networking, Azure Application Gateway, and other Azure components.
Experienced Azure DevOps Engineer ready for a Senior role or already at a Senior level.
Demonstrable experience with the following technologies:
Microsoft Azure Platform as a Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions, and other serverless services.
Understanding of Microsoft identity and access management products, including Azure AD or AD B2C.
Microsoft Azure Operational and Monitoring tools, including Azure Monitor, App Insights and Log Analytics.
Knowledge of PowerShell, GitHub, ARM templates, version controls/hotfix strategy and deployment automation.
Ability and desire to quickly pick up new technologies, languages, and tools
Excellent communication skills and a good team player.
Passion for code quality and best practices is an absolute must
Must show evidence of your passion for technology and continuous learning
- Responsible for the entire infrastructure including Production (both bare metal and AWS).
- Manage and maintain the production systems and operations, including SysAdmin and DB activities.
- Improve tools and processes, automate manual efforts, and maintain the health of the system.
- Champion best practices, CI-CD, Metrics Driven Development
- Optimise the company's computing architecture
- Conduct systems tests for security, performance, and availability
- Maintain security of the system
- Develop and maintain design and troubleshooting documentation
- 7+ years of experience in DevOps/Technical Operations
- Extensive experience with scripting languages like shell, Python, etc.
- Experience in developing and maintaining CI/CD processes for SaaS applications using tools such as Jenkins
- Hands-on experience using configuration management tools such as Puppet, SaltStack, Ansible, etc.
- Hands-on experience building and handling VMs and containers utilizing tools such as Kubernetes, Docker, etc.
- Hands-on experience building, designing, and maintaining cloud-based applications with AWS, Azure, GCP, etc.
- Knowledge of Databases (MySQL, NoSQL)
- Knowledge of security/ethical hacking
- Have experience with ElasticSearch, Kibana, LogStash
- Have experience with Cassandra, Hadoop, or Spark
- Have experience with Mongo, Hive
Experience and Education
• Bachelor’s degree in engineering or equivalent.
Work experience
• 4+ years of infrastructure and operations management experience at a global scale.
• 4+ years of experience in operations management, including monitoring, configuration management, automation, backup, and recovery.
• Broad experience in the data center, networking, storage, server, Linux, and cloud technologies.
• Broad knowledge of release engineering: build, integration, deployment, and provisioning, including familiarity with different upgrade models.
• Demonstrable experience with executing, or being involved in, a complete end-to-end project lifecycle.
Skills
• Excellent communication and teamwork skills – both oral and written.
• Skilled at collaborating effectively with both Operations and Engineering teams.
• Process and documentation oriented.
• Attention to detail. Excellent problem-solving skills.
• Ability to simplify complex situations and lead calmly through periods of crisis.
• Experience implementing and optimizing operational processes.
• Ability to lead small teams: provide technical direction, prioritize tasks to achieve goals, identify dependencies, report on progress.
Technical Skills
• Strong fluency in Linux environments is a must.
• Good SQL skills.
• Demonstrable scripting/programming skills (bash, python, ruby, or go) and the ability to develop custom tool integrations between multiple systems using their published APIs/CLIs.
• L3, load balancer, routing, and VPN configuration.
• Kubernetes configuration and management.
• Expertise using version control systems such as Git.
• Configuration and maintenance of database technologies such as Cassandra, MariaDB, Elastic.
• Designing and configuration of open-source monitoring systems such as Nagios, Grafana, or Prometheus.
• Designing and configuration of log pipeline technologies such as ELK (Elastic Search Logstash Kibana), FluentD, GROK, rsyslog, Google Stackdriver.
• Using and writing modules for Infrastructure as Code tools such as Ansible, Terraform, Helm, Kustomize.
• Strong understanding of virtualization and containerization technologies such as VMware, Docker, and Kubernetes.
• Specific experience with Google Cloud Platform or Amazon EC2 deployments and virtual machines.
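The "custom tool integrations using published APIs/CLIs" skill above usually amounts to small glue scripts. A hedged Python sketch that parses `kubectl get nodes -o json` output to flag unhealthy nodes; the sample data below stands in for a real cluster call, and the function name is ours, not from any posting:

```python
import json

def unhealthy_nodes(kubectl_json: str) -> list[str]:
    """Given `kubectl get nodes -o json` output, return names of nodes
    whose Ready condition is not True. Pure parsing; no cluster needed."""
    nodes = json.loads(kubectl_json)["items"]
    bad = []
    for node in nodes:
        # Collapse the conditions list into a {type: status} lookup.
        conds = {c["type"]: c["status"] for c in node["status"]["conditions"]}
        if conds.get("Ready") != "True":
            bad.append(node["metadata"]["name"])
    return bad

if __name__ == "__main__":
    # Stand-in for a real CLI call such as:
    #   subprocess.run(["kubectl", "get", "nodes", "-o", "json"], ...)
    sample = json.dumps({"items": [
        {"metadata": {"name": "node-a"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "node-b"},
         "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    ]})
    print(unhealthy_nodes(sample))  # only node-b is unhealthy
```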
This person MUST have:
- Minimum of 3-5 years of prior experience as a DevOps Engineer.
- Expertise in CI/CD pipeline maintenance and enhancement, specifically Jenkins-based pipelines.
- Working experience with engineering tools like Git, Git workflow, Bitbucket, JIRA, etc.
- Hands-on experience deploying and managing infrastructure with CloudFormation/Terraform
- Experience managing AWS infrastructure
- Hands-on experience with Linux administration.
- Basic understanding of Kubernetes/Docker orchestration
- Works closely with the engineering team on day-to-day activities
- Manages existing infrastructure/pipelines/engineering tools (on-prem or AWS) for the engineering team (build servers/Jenkins nodes, etc.)
- Works with the engineering team on new configuration required for infrastructure, such as replicating setups and adding new resources
- Works closely with the engineering team to improve existing build pipelines
- Troubleshoots problems across infrastructure/services
Experience:
- Min 5-7 year experience
Location
- Remotely, anywhere in India
Timings:
- 40 hours a week (11 AM to 7 PM).
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Requirements:
● Knowledge of building micro-services.
● Experience in managing cloud infrastructure (AWS, GCP, Azure) with disaster recovery and security in mind.
● Experience with High Availability cluster setup.
● Experience in creating alerting and monitoring strategies.
● Strong debugging skills.
● Experience with zero-downtime Continuous Delivery setups (Jenkins, AWS CodeDeploy, TeamCity, GoCD, etc.).
● Experience with Infrastructure as Code & automation tools (Bash, Ansible, Puppet, Chef, Terraform, etc.).
● Mastery of *nix systems, including working with Docker and process & network monitoring tools.
● Knowledge of monitoring tools like New Relic, AppDynamics, etc.
● Experience with messaging systems (RMQ, Kafka, etc.).
● Knowledge of DevOps intelligence.
● Experience in setting up and driving DevOps initiatives inside the org.
● Good team player.
● Good to have: experience in Kubernetes cluster management.
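As an illustration of the zero-downtime delivery and automation tooling listed above, here is a minimal Ansible sketch of a rolling restart. The host group, repo URL, and service name are placeholders, not from this posting:

```yaml
# Rolling restart, one host at a time, so the load balancer always has
# healthy backends while each node is updated.
- hosts: webservers
  serial: 1          # zero-downtime: update one node at a time
  tasks:
    - name: Pull the latest application release
      ansible.builtin.git:
        repo: "https://example.com/app.git"   # placeholder repo
        dest: /opt/app
        version: main

    - name: Restart the app service
      ansible.builtin.service:
        name: app                              # placeholder service name
        state: restarted

    - name: Wait until the health check passes before moving on
      ansible.builtin.uri:
        url: "http://{{ inventory_hostname }}:8080/health"
      register: health
      until: health.status == 200
      retries: 10
      delay: 5
```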