knowledge of EC2, RDS and S3.
● Good command of the Linux environment
● Experience with tools such as Docker, Kubernetes, Redis, Node.js, and Nginx (server configuration and deployment), as well as Kafka, Elasticsearch, Ansible, Terraform, etc.
● Bonus: AWS certification
● Bonus: Basic understanding of database queries for relational databases such as
MySQL.
● Bonus: Experience with CI servers such as Jenkins, Travis, or similar
● Bonus: Demonstrated programming capability in a high-level programming
language such as Python, Go, or similar
● Develop, maintain, and administer tools that automate operational activities and improve engineering productivity (see the illustrative sketch after this list)
● Automate continuous delivery and on-demand capacity management solutions
● Develop configuration and infrastructure solutions for internal deployments
● Troubleshoot, diagnose, and fix software issues
● Update, track, and resolve technical issues
● Suggest architecture improvements and recommend process improvements
● Evaluate new technology options and vendor products; ensure critical system security through the use of best-in-class security solutions
● Technical experience in a similar role supporting large-scale production distributed systems
● Must understand the overall system architecture, improve design, and implement new processes.
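The first bullet above asks for tooling that automates operational activities. Purely as an illustration (not part of the job requirements), a minimal Python/boto3 sketch of that kind of tool follows; the region and the "Owner" tag are assumptions, not anything specified in the posting.

    # Illustrative sketch: report EC2 instances missing an "Owner" tag.
    # The region and tag name are assumptions.
    import boto3

    def untagged_instances(region="ap-south-1", required_tag="Owner"):
        ec2 = boto3.client("ec2", region_name=region)
        missing = []
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    if required_tag not in tags:
                        missing.append(instance["InstanceId"])
        return missing

    if __name__ == "__main__":
        print("\n".join(untagged_instances()))

A real tool of this kind would typically run on a schedule (cron or Lambda) and feed its findings into alerting rather than printing them.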

About makeO
Toothsi is a company that offers at-home smile makeover services. Established in 2018, the company has expanded its operations throughout India and now serves more than one million happy customers.
Together with our more than one hundred in-house dental specialists, our carefully picked in-house orthodontists each have more than ten years of experience in the aligner industry. This ensures that customers who choose Toothsi are selecting the finest option available. Toothsi is a group of people who are well qualified in their fields and have years of experience. We collaborate with orthodontists who have worked in the field for more than ten years and developed more than one hundred thousand smiles, as well as with planners, world-class technicians, and customer service specialists.
A better life starts with a beautiful smile with ToothSi
Hi,
We are looking for a candidate with experience in DevSecOps.
Please find the JD below for your reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups and VPC configurations, and utilize Aviatrix for efficient network orchestration (see the illustrative sketch after this list).
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration.
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
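The security-group responsibility above is the kind of task that is often automated. A minimal, purely illustrative Python/boto3 sketch (not part of the JD; the region is an assumption) that flags rules open to the whole internet:

    # Illustrative sketch: list security group rules open to 0.0.0.0/0.
    import boto3

    def world_open_rules(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        findings = []
        for page in ec2.get_paginator("describe_security_groups").paginate():
            for sg in page["SecurityGroups"]:
                for perm in sg["IpPermissions"]:
                    for ip_range in perm.get("IpRanges", []):
                        if ip_range.get("CidrIp") == "0.0.0.0/0":
                            findings.append((sg["GroupId"], perm.get("FromPort"), perm.get("ToPort")))
        return findings

    if __name__ == "__main__":
        for group_id, from_port, to_port in world_open_rules():
            print(f"{group_id}: ports {from_port}-{to_port} open to 0.0.0.0/0")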
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability scanning tools within CI/CD pipelines.
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
Please send across your updated profile.
Job description
Problem Statement and Solution
Only 10% of India speaks English and 90% speak over 25 languages and 1000s of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. The non-English speaking internet users will balloon to about 600 million users out of the total 750 million internet users in India by 2020. This will make the vernacular segment one of the largest segments in the world - almost 2x the size of the US population. The vernacular segment has very few products that they can use on the internet.
One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally but the experience is mostly in English. There’s a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.
About Koo
Koo was founded in March 2020, as a micro-blogging platform in both Indian languages and English, which gives a voice to the millions of Indians who communicate in Indian languages.
Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that can deliver an immersive language experience to an Indian user, thereby enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ enables users to leverage the voice assistant to share their thoughts without having to type. In August 2021, Koo crossed 10 million downloads, in just 16 months of launch.
Koo has also been available in Nigeria since June 2021.
Founding Team
Koo is founded by veteran internet entrepreneurs - Aprameya Radhakrishna (CEO, Taxiforsure) and Mayank Bidawatka (Co-founder, Goodbox & Coreteam, redBus).
Technology Team & Culture
The technology team comprises sharp coders, technology geeks, and people who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent has come from the likes of Google, Walmart, redBus, and Dailyhunt. Anyone who is part of the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we've built. The technology stack comprises a wide variety of cutting-edge technologies like Kotlin, Java 15, Reactive Programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, deep learning, etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.
Technology skill sets required for a matching profile
- Experience of 3 to 7 years in a DevOps role, with at least one stint at a fast-paced startup (mandatory).
- Mandatory experience with containers, Kubernetes (EKS, set up from scratch), Istio, and microservices.
- Sound knowledge of technologies like Terraform, automation scripts, cron jobs, etc. Must have worked toward infrastructure as code, e.g. setting up new environments entirely from code.
- Knowledge of industry standards around monitoring, alerting, self-healing, high availability, auto-scaling, etc.
- Exhaustive experience with various cloud technologies (especially on AWS) like SQS, SNS, Elasticsearch, ElastiCache, Elastic Transcoder, VPCs, subnets, security groups, etc.
- Must have set up stable CI/CD pipelines capable of zero-downtime deployments in rolling-update, blue-green, or canary modes (see the illustrative sketch after this list).
- Experience with VPN and LDAP solutions for secure login to infrastructure and for providing SSO.
- Master of deploying and troubleshooting all layers of an application, from network and frontend to backend and databases (MongoDB, Redis, Postgres, Cassandra, Elasticsearch, Aerospike, etc.).
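For the zero-downtime deployment point above, a minimal illustrative sketch of the rolling-update variant using the official Kubernetes Python client follows; the deployment name, namespace, and image tag are placeholders, not anything from the posting.

    # Illustrative sketch: trigger a rolling update and wait for it to converge.
    import time
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    NAME, NAMESPACE, NEW_IMAGE = "web", "default", "registry.example.com/web:v2"

    # Patching the pod template image starts a rolling update (the default strategy).
    apps.patch_namespaced_deployment(
        name=NAME,
        namespace=NAMESPACE,
        body={"spec": {"template": {"spec": {"containers": [{"name": NAME, "image": NEW_IMAGE}]}}}},
    )

    # Poll until every replica runs the new template and is available.
    while True:
        dep = apps.read_namespaced_deployment(NAME, NAMESPACE)
        desired = dep.spec.replicas or 0
        if (dep.status.updated_replicas or 0) == desired and (dep.status.available_replicas or 0) == desired:
            print("rollout complete")
            break
        time.sleep(5)

Blue-green and canary modes differ mainly in how traffic is shifted: separate deployments behind a service, or weighted routing via a mesh such as Istio.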
Projects you'll be working on:
- We're focused on enhancing our product for our clients and their users, as well as streamlining operations and improving our technical foundation.
- Writing scripts for procurement, configuration and deployment of instances (infrastructure automation) on GCP
- Managing Kubernetes cluster
- Manage products and services like VPC, Elasticsearch, Cloud Functions, RabbitMQ, Redis servers, Postgres infrastructure, App Engine, etc.
- Supporting developers in setting up infrastructure for services
- Manage and improve microservices infrastructure
- Managing high availability, low latency applications
- Focus on security best practices and assist in security and compliance activities
Requirements
- Minimum 3 years' experience in a DevOps role
- Minimum 1 year's experience with Kubernetes clusters (infrastructure as code, maintenance, and scalability).
- Bash expertise; professional programming experience in Node.js or Python
- Experience setting up, configuring, and using Jenkins or other CI tools, and building CI/CD pipelines
- Experience setting up microservices architectures
- Experience with package management and deployments
- Thorough understanding of networking.
- Understanding of all common services and protocols
- Experience in web server configuration, monitoring, network design and high availability
- Thorough understanding of DNS, VPN, and SSL/TLS (see the illustrative sketch below)
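For the DNS/VPN/SSL point above, a small illustrative Python sketch (the hostname is a placeholder) that checks how many days remain on a host's TLS certificate:

    # Illustrative sketch: days remaining on a host's TLS certificate.
    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

    if __name__ == "__main__":
        print(days_until_expiry("example.com"))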
Technologies you'll work with:
- GKE, Prometheus, Grafana, Stackdriver
- ArgoCD and GitHub Actions
- NodeJS Backend
- Postgres, ElasticSearch, Redis, RabbitMQ
- Whatever else you decide - we're constantly re-evaluating our stack and tools
- Having prior experience with the technologies is a plus, but not mandatory for skilled candidates.
Benefits
- Remote option - you can work from a location of your choice :)
- Reimbursement of Home Office Setup
- Competitive Salary
- Friendly atmosphere
- Flexible paid vacation policy
The brand is associated with major icons across categories and has tie-ups with industries covering fashion, sports, and, of course, music. The founders are marketing grads with vast experience in consumer lifestyle products and other major brands. With their vigorous efforts toward quality and marketing, they have been able to strike a chord with major e-commerce brands and even consumers.
What you will do:
- Defining and documenting best practices and strategies regarding application deployment and infrastructure maintenance
- Providing guidance, thought leadership and mentorship to development teams to build cloud competencies
- Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design
- Managing cloud environments in accordance with company security guidelines
- Developing and implementing technical efforts to design, build and deploy AWS applications at the direction of lead architects, including large-scale data processing, computationally intensive statistical modeling and advanced analytics
- Participating in all aspects of the software development life cycle for AWS solutions, including planning, requirements, development, testing, and quality assurance
- Troubleshooting incidents, identifying root cause, fixing and documenting problems and implementing preventive measures
- Educating teams on the implementation of new cloud-based initiatives, providing associated training as required
Desired Candidate Profile
What you need to have:
- Bachelor's degree in Computer Science or Information Technology
- 2+ years of experience as an architect designing, developing, and implementing cloud solutions on AWS platforms
- Experience in several of the following areas: database architecture, ETL, business intelligence, big data, machine learning, advanced analytics
- Proven ability to collaborate with multi-disciplinary teams of business analysts, developers, data scientists and subject matter experts
- Self-motivation with the ability to drive features to delivery
- Strong analytical and problem solving skills
- Excellent oral and written communication skills
- Good logical sense, strong technical skills and the ability to learn new technologies quickly
- AWS certifications are a plus
- Knowledge of web services, API, REST, and RPC
Responsibilities
- Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Participating in on-call escalation to troubleshoot customer-facing issues
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
Skills
- Should have a couple of years of strong experience leading a DevOps team: planning and defining the DevOps roadmap and executing it together with the team
- Familiarity with the AWS cloud and JSON templates, Python, and AWS CloudFormation templates (see the illustrative sketch after this list)
- Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, the AWS CLI, and REST APIs
- Design and implement system architecture on the AWS cloud
- Develop automation scripts: ARM templates, Ansible, Chef, Python, PowerShell
- Knowledge of AWS services and cloud design patterns
- Knowledge of cloud fundamentals like autoscaling and serverless
- Experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, and CI/CD pipeline setup
- CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution
- Expert knowledge of Windows, Linux, and macOS, with at least 5-6 years of system administration experience
- Should have strong skills in using Jira
- Should have knowledge of managing CI/CD pipelines for public cloud deployments on AWS
- Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation
- Experience with monitoring tools like Pingdom, Nagios, etc.
- Experience with reverse proxy services like Nginx and Apache
- Desirable: experience with Bitbucket and version control tools like Git/SVN
- Experience with manual/automated testing of application deployments (desirable)
- Experience with database technologies such as PostgreSQL and MySQL
- Knowledge of Helm and Terraform
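For the CloudFormation points above, a minimal illustrative Python/boto3 sketch of launching a stack and waiting for it to complete; the stack name, template, and parameter values are placeholders.

    # Illustrative sketch: create a CloudFormation stack and wait for completion.
    import json
    import boto3

    # A deliberately tiny JSON template: one S3 bucket whose name is a parameter.
    TEMPLATE = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {"BucketName": {"Type": "String"}},
        "Resources": {
            "DemoBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": {"Ref": "BucketName"}},
            }
        },
    }

    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.create_stack(
        StackName="demo-stack",
        TemplateBody=json.dumps(TEMPLATE),
        Parameters=[{"ParameterKey": "BucketName", "ParameterValue": "demo-unique-bucket-name"}],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
    print("stack created")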
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them, using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time and at a fraction of the current cost.
A few recognitions:
- Recognized among LinkedIn's Top 25 startups in India to work with, 2019
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place in the Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment and minimize downtime and turnaround time
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing IaaS and PaaS components on popular public cloud service providers like AWS, Azure, GCP, etc.
- Proficiency in Unix Operating systems and comfortable with Networking concepts
- Experience with developing/deploying a scalable system
- Experience with distributed databases & message queues (like Cassandra, Elasticsearch, MongoDB, Kafka, etc.; see the illustrative sketch after this list)
- Experience in managing Hadoop clusters
- Understanding of containers and have managed them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
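The message-queue point above mentions Kafka; a minimal illustrative sketch using the kafka-python client follows. The broker address and topic name are placeholders, not anything from the posting.

    # Illustrative sketch: produce and consume JSON messages with kafka-python.
    import json
    from kafka import KafkaConsumer, KafkaProducer

    BROKERS = ["localhost:9092"]
    TOPIC = "entity-updates"

    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(TOPIC, {"entity_id": 42, "status": "verified"})
    producer.flush()

    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        consumer_timeout_ms=5000,  # stop iterating after 5s of silence
    )
    for message in consumer:
        print(message.value)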
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLA and business objectives through whole-lifecycle design of services, from inception to implementation.
- Ensuring availability, performance, security, and scalability of AWS production systems
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
- Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
- Participate in a 24x7 on-call shift rotation for Level 2 and higher escalations
- Respond to incidents and write blameless RCA’s/postmortems
- Implement and practice proper security controls and processes
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the platform.
Must have:
- Minimum 3 Years of Experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong interpersonal skills.
- Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
- Must have experience in Docker, Kubernetes, Amazon ECS or Mesos
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
- Proficient in shell scripting, and most importantly, know when to stop scripting and start developing.
- Experience in the creation of highly automated infrastructures with configuration management tools like Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and administration.
- Production experience with a major cloud provider such as Amazon AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcache, or another in-memory data store (see the illustrative sketch after this list).
- Experience with various load balancing technologies such as Amazon ALB/ELB, HA Proxy, F5.
- Comfortable with large-scale, highly-available distributed systems.
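For the in-memory data store point above, a minimal illustrative cache-aside sketch with the redis-py client; the host and key layout are assumptions.

    # Illustrative sketch: cache-aside pattern with redis-py.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def get_profile(user_id, loader, ttl_seconds=300):
        """Return a cached profile, falling back to `loader` on a miss."""
        key = f"profile:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        profile = loader(user_id)                       # e.g. a database query
        r.setex(key, ttl_seconds, json.dumps(profile))  # cache with a TTL
        return profile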
Good to have:
- Understanding of Web Standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with Hashicorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as CAP Theorem, Concurrency Control, etc.
- Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing and helping the organization control infrastructure costs.
- Experience in Kafka, RabbitMQ or any messaging bus.
Requirements
- Design, write and build tools to improve the reliability, latency, availability and scalability of HealthifyMe application.
- Communicate, collaborate and work effectively across distributed teams in a global environment
- Optimize performance and solve issues across the entire stack: hardware, software, application, and network.
- Experience building infrastructure with Terraform/CloudFormation or equivalent.
- Experience with Ansible or an equivalent is beneficial
- Ability to use a wide variety of Open Source Tools
- Experience with AWS is a must.
- Minimum 5 years of running services in a large-scale environment.
- Expert level understanding of Linux servers, specifically RHEL/CentOS.
- Practical, proven knowledge of shell scripting and at least one higher-level language (eg. Python, Ruby, GoLang).
- Experience with source code and binary repositories, build tools, and CI/CD (Git, Artifactory, Jenkins, etc)
- Demonstrable knowledge of TCP/IP, HTTP, and web application security, and experience supporting multi-tier web application architectures (see the illustrative sketch below).
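For the last point above, a tiny illustrative Python probe (the URL is a placeholder) of the kind used to track availability and latency of an HTTP endpoint:

    # Illustrative sketch: probe an HTTP endpoint and report status plus latency.
    import time
    import requests

    def probe(url, timeout=5):
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=timeout)
            latency_ms = (time.monotonic() - start) * 1000
            return resp.status_code, round(latency_ms, 1)
        except requests.RequestException as exc:
            return None, str(exc)

    if __name__ == "__main__":
        print(probe("https://example.com/health"))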
Look forward to
- Working with a world-class team.
- Fun & work at the same place with an amazing work culture and flexible timings.
- Get ready to transform yourself into a health junkie
Join HealthifyMe and make history!
PRAXINFO Hiring DevOps Engineer.
Position : DevOps Engineer
Job Location : C.G.Road, Ahmedabad
EXP : 1-3 Years
Salary : 40K - 50K
Required skills:
⦿ Good understanding of cloud infrastructure (AWS, GCP etc)
⦿ Hands on with Docker, Kubernetes or ECS
⦿ Ideally a strong Linux background (RHCSA, RHCE)
⦿ Good understanding of monitoring systems (Nagios, etc.) and logging solutions (Elasticsearch, etc.)
⦿ Microservice architectures
⦿ Experience with distributed systems and highly scalable systems
⦿ Demonstrated history in automating operations processes via services and tools ( Puppet, Ansible etc)
⦿ Systematic problem-solving approach coupled with a strong sense of ownership and drive.
If anyone is interested, then share your resume at hiring at praxinfo dot com!
#linux #devops #engineer #kubernetes #docker #containerization #python #shellscripting #git #jenkins #maven #ant #aws #RHCE #puppet #ansible








