Job Description:
We are seeking a motivated DevOps intern to join our team. The intern will be responsible for deploying and maintaining applications in AWS and Azure cloud environments, as well as on client local machines when required. The intern will troubleshoot any deployment issues and ensure the high availability of the applications.
Responsibilities:
- Deploy and maintain applications in AWS and Azure cloud environments
- Deploy applications on client local machines when needed
- Troubleshoot deployment issues and ensure high availability of applications
- Collaborate with development teams to improve deployment processes
- Monitor system performance and implement optimizations
- Implement and maintain CI/CD pipelines
- Assist in implementing security best practices
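The availability and troubleshooting bullets above usually boil down to some form of readiness polling after a deploy. As an illustrative sketch only (not part of the posting), a minimal Python health gate might look like this; the `check` callable, retry count, and delay are assumptions:

```python
import time

def wait_until_healthy(check, retries=5, delay=1.0):
    """Poll a health check until it passes or retries are exhausted.

    `check` is any zero-argument callable returning True when the
    service is up (e.g. an HTTP ping against a load balancer).
    Returns the number of attempts it took; raises if never healthy.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        time.sleep(delay)
    raise RuntimeError(f"service unhealthy after {retries} attempts")

# Example: a fake check that succeeds on the third probe
probes = iter([False, False, True])
attempts = wait_until_healthy(lambda: next(probes), retries=5, delay=0)
```

In practice the fake probe would be replaced by a real request to the application's health endpoint, and a CI/CD step would fail the deploy when the gate raises.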
Requirements:
- Currently pursuing a degree in Computer Science, Engineering, or related field
- Knowledge of cloud computing platforms (AWS, Azure)
- Familiarity with containerization technologies (Docker, Kubernetes)
- Basic understanding of networking principles
- Strong problem-solving skills
- Excellent communication skills
Nice to Have:
- Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet)
- Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack)
- Understanding of security best practices in cloud environments
Benefits:
- Hands-on experience with cutting-edge technologies.
- Opportunity to work on exciting AI and LLM projects
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to apply our learnings, learn a lot along with you, and do something even more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams in the US and in India for time-zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4 years of relevant work experience in a DevOps role
- At least 3 years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
Role: Principal DevOps Engineer
About the Client
Our client is a product-based company that builds a platform using AI and ML technology for transportation and logistics. They also have a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skill
Key Responsibilities:
- Work with the development team to plan, execute and monitor deployments
- Capacity planning for product deployments
- Adopt best practices for deployment and monitoring systems
- Ensure that SLAs for performance and uptime are met
- Constantly monitor systems, suggest changes to improve performance and decrease costs.
- Ensure the highest standards of security
Key Competencies (Functional):
- Proficiency in coding in at least one scripting language (Bash, Python, etc.)
- Has personally managed a fleet of servers (> 15)
- Understands the different environments: production, deployment, and staging
- Worked in micro service / Service oriented architecture systems
- Has worked with automated deployment systems – Ansible / Chef / Puppet.
- Can write MySQL queries
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
DESIRED SKILLS AND EXPERIENCE
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
3-5 years overall and at least 1-2 years of hands-on experience in designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere
o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
Key Skills Required for Lead DevOps Engineer
Containerization Technologies: Docker, Kubernetes, OpenShift
Cloud Technologies: AWS, Azure, GCP
CI/CD Pipeline Tools: Jenkins, Azure DevOps
Configuration Management Tools: Ansible, Chef
SCM Tools: Git, GitHub, Bitbucket
Monitoring Tools: New Relic, Nagios, Prometheus
Cloud Infra Automation: Terraform
Scripting Languages: Python, Shell, Groovy
· Ability to decide on the architecture and tools for the project as per availability
· Sound knowledge of deployment strategies and the ability to define timelines
· Team handling skills are a must
· Debugging skills are an advantage
· Good to have knowledge of databases like MySQL, PostgreSQL
· Familiarity with Kafka and RabbitMQ is an advantage
· Good to have knowledge of web servers to deploy web applications
· Good to have knowledge of code quality checking tools like SonarQube and vulnerability scanning
· Experience in DevSecOps is an advantage
Note: Tools mentioned in bold are a must; the others are an added advantage
Requirement
- 1 to 7 years of experience, with relevant experience in managing development operations
- Hands-on experience with AWS
- Thorough knowledge of setting up release pipelines, and managing multiple environments like Beta, Staging, UAT, and Production
- Thorough knowledge of best cloud practices and architecture
- Hands-on with benchmarking and performance monitoring
- Identifying various bottlenecks and taking pre-emptive measures to avoid downtime
- Hands-on knowledge with at least one toolset Chef/Puppet/Ansible
- Hands-on with CloudFormation / Terraform or other Infrastructure as code is a plus.
- Thorough experience with shell scripting, and should not shy away from learning new technologies or programming languages
- Experience with other cloud providers like Azure and GCP is a plus
- Should be open to R&D for creative ways to improve performance while keeping costs low
What do we want the person to do?
- Manage, Monitor and Provision Infrastructure - Majorly on AWS
- Will be responsible for maintaining 100% uptime on production servers (Site Reliability)
- Setting up a release pipeline for current releases. Automating releases for Beta, Staging & Production
- Maintaining near-production replica environments on Beta and Staging
- Automating Releases and Versioning of Static Assets (Experience with Chef/Puppet/Ansible)
- Should have hands-on experience with Build Tools like Jenkins, GitHub Actions, AWS CodeBuild etc
- Identify performance gaps and ways to fix them.
- Weekly meetings with Engineering Team to discuss the changes/upgrades. Can be related to code issues/architecture bottlenecks.
- Creative Ways of Reducing Costs of Cloud Computing
- Convert Infrastructure Deployment / Provision to Infrastructure as Code for reusability and scaling.
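The "Versioning of Static Assets" bullet above is commonly implemented with content-hashed filenames, so a release only invalidates assets whose bytes actually changed. A minimal Python sketch, with the naming scheme chosen purely for illustration:

```python
import hashlib

def versioned_name(filename: str, content: bytes) -> str:
    """Return a cache-busting name like 'app.3b2a91c0.js'.

    Embedding a content hash means the CDN/browser cache is only
    invalidated for assets whose bytes actually changed.
    """
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    if not dot:                      # file with no extension at all
        return f"{filename}.{digest}"
    return f"{stem}.{digest}.{ext}"

name = versioned_name("app.js", b"console.log('hi');")
```

A release script would rename each built asset this way and rewrite references in the HTML; any tool (Chef, Ansible, or a plain CI step) can then deploy the renamed files with long cache lifetimes.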
A Consumer Food brand into Artisan Bakery
About Company
Our client is one of the strongest Consumer brands in the Bakery category, having a 25000 sq ft state-of-the-art centralized manufacturing facility with European equipment near Ahmedabad, Gujarat. The founding team consists of a ‘Master Baker’ from Le Cordon Bleu, Paris, one of the finest culinary institutes in the world and an IIM-A alumni with a McKinsey background.
Job Description:
- Product Development:
- Includes new product development, product improvement, process improvement and optimization.
- Formulate bakery recipes for products such as cakes, cookies and dough in general that will deliver optimal flavor systems which comply with cost, product concept design, and regulatory and sensory requirements.
- Create recipes that are consistent with the company philosophy of "honest dough", that optimize the taste experience, and that achieve preference in testing with target consumers.
- Develop and implement the technical plans for multiple food development projects simultaneously.
- Project work includes all phases of New Product Development, from initial prototype development to pilot plant scale to full-scale commercialization.
- Design and execute experiments, and analyze and interpret data to make sound technical recommendations for specific projects.
- Prepare prototype food products using the research lab, pilot lab, manufacturing facilities and equipment as required.
- Provide solutions to a wide range of challenges by applying technical knowledge and experience.
- Collaborate with cross-functional teams and technical peers throughout the development of new products and processes.
- Develop and implement cost savings ideas through formula and process optimization.
- Projects like improving certain product attributes, e.g. nutritional value, shelf life, etc.
- Training the workforce on the new products and processes, making product barcode labels and supervising the production of the pilot batch.
- Quality Assurance:
- Includes all areas and processes of the plant like production, packaging, raw material inspection etc.
- Ensuring regulatory compliance
- Conducting regular internal audits
- Raw Material Quality Control:
- Supervision of all raw material quality checks and inspections including food ingredients, water, packaging material, cleaning supplies etc.
- Ensuring proper documentation is maintained
- Finished Product Testing:
- Sending products for testing, including nutritional information, shelf life, quality, etc., and documenting the same
- Suggesting any health/nutritional claims or taking corrective action (if required) based on the tests
- Miscellaneous:
- Other miscellaneous tasks like creating content for product packaging, description etc.
- Coordination of external visits, audits, training, workshops etc.
Qualification, Experience, and Skills required:
- 2-4 years of experience in a food manufacturing environment and QC Lab
- B.Sc. or equivalent degree in Food discipline.
- Previous Project Management experience.
- Strong leader with good interpersonal and communications skills with the ability to communicate across departments/suppliers and external customers.
- Strong organizational, and project management skills to assist with critical path deadlines and project priorities.
- Working knowledge of the factory and raw materials currently used on site.
- Can effectively time manage - to structure the team's day efficiently with regards to differing priorities/workloads as well as working to tight deadlines.
- Hardworking and passionate with an eye for detail.
- Ability to problem solve
Interfaces with other processes and/or business functions to ensure they can leverage the benefits provided by the AWS Platform process
Responsible for managing the configuration of all IaaS assets across the platforms
Hands-on Python experience
Manages the entire AWS platform (Python, Flask, REST API, Serverless Framework) and recommends the options that best meet the organization's requirements
Has a good understanding of the various AWS services, particularly S3, Athena, Glue, Lambda, and CloudFormation, along with Python code and other AWS serverless resources
AWS Certification is a plus
Knowledge of best practices for IT operations in an always-on, always-available service model
Responsible for the execution of the process controls, ensuring that staff comply with process and data standards
Qualifications
Bachelor’s degree in Computer Science, Business Information Systems, or relevant experience and accomplishments
3 to 6 years of experience in the IT field
AWS Python developer
AWS, Serverless/Lambda, Middleware.
Strong AWS skills including Data Pipeline, S3, RDS, and Redshift, with familiarity with other components like Lambda, Glue, Step Functions, and CloudWatch
Must have created REST APIs with AWS Lambda
3 years of relevant Python experience
Good to have: experience working on projects and problem solving with large-scale multi-vendor teams
Good to have: knowledge of Agile development
Good knowledge of the SDLC
Hands-on experience with AWS databases (RDS, etc.)
Good to have: unit testing experience
Good to have: working knowledge of CI/CD
Good communication skills, as there will be client interaction and documentation
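For the Lambda REST API requirement above, the usual shape is a handler in the API Gateway proxy-integration format: the HTTP request arrives in `event`, and the function returns a dict with `statusCode` and a JSON `body`. A minimal sketch (the `name` parameter and message are made up for illustration):

```python
import json

def lambda_handler(event, context):
    """Minimal REST-style handler for API Gateway proxy integration.

    The proxy integration delivers method, path, and query string
    inside `event` and expects statusCode/headers/body back.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

resp = lambda_handler({"queryStringParameters": {"name": "dev"}}, None)
```

Because the handler is a plain function taking a dict, it can be unit tested locally without deploying, which is one reason this style pairs well with the CI/CD and unit-testing bullets above.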
Education (degree): Bachelor’s degree in Computer Science, Business Information Systems, or relevant experience and accomplishments
Years of Experience: 3-6 years
Technical Skills
Linux/Unix system administration
Continuous Integration/Continuous Delivery tools like Jenkins
Cloud provisioning and management – Azure, AWS, GCP
Ansible, Chef, or Puppet
Python, PowerShell & BASH
Job Details
JOB TITLE/JOB CODE: AWS Python Developer, III-Sr. Analyst
RC: TBD
PREFERRED LOCATION: HYDERABAD, IND
POSITION REPORTS TO: Manager USI T&I Cloud Managed Platform
CAREER LEVEL: 3
Work Location:
Hyderabad
Job Title - DevOps Engineer
Reports Into - Lead DevOps Engineer
Location - India
A Little Bit about Kwalee….
Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.
What’s In It For You?
- Hybrid working - 3 days in the office, 2 days remote/WFH is the norm
- Flexible working hours - we trust you to choose how and when you work best
- Profit sharing scheme - we win, you win
- Private medical cover - delivered through BUPA
- Life Assurance - for long-term peace of mind
- On-site gym - take care of yourself
- Relocation support - available
- Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars
- Pitch and make your own games on Creative Wednesdays (https://www.kwalee.com/blog/inside-kwalee/what-are-creative-wednesdays/)!
Are You Up To The Challenge?
As a DevOps Engineer, you have a passion for automation, security, and building reliable, expandable systems. You develop scripts and tools to automate deployment tasks and monitor critical aspects of the operation, resolve engineering problems and incidents, and collaborate with architects and developers to help create platforms for the future.
Your Team Mates
The DevOps team works closely with game developers, front-end and back-end server developers, making, updating and monitoring application stacks in the cloud. Each team member has specific responsibilities, with their own projects to manage and their own ideas to bring on how the projects should work. Everyone strives for the most efficient, secure and automated delivery of application code and supporting infrastructure.
What Does The Job Actually Involve?
- Find ways to automate tasks and monitoring systems to continuously improve our systems.
- Develop scripts and tools to make our infrastructure resilient and efficient.
- Understand our applications and services and keep them running smoothly.
Your Hard Skills
- Minimum 1 year of experience in a DevOps engineering role
- Deep experience with Linux and Unix systems
- Networking basics knowledge (named, nginx, etc.)
- Some coding experience (Python, Ruby, Perl, etc.)
- Experience with common automation tools (e.g. Chef, Terraform, etc.)
- AWS experience is a plus
- A creative mindset motivated by challenges and constantly striving for the best
Your Soft Skills
Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues.
We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.
A Little More About Kwalee
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.
Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.
We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France.
We have a truly global team making games for a global audience, and it’s paying off: - Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers’ Association) and our games have been downloaded in every country on earth - including Antarctica!
at LogiNext
Only apply on this link - https://loginext.hire.trakstar.com/jobs/fk025uh
LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering (DevOps) or Senior Database Administrator to lead the development and operations efforts for its product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure, and experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations to automate daily tasks
- Ensure High Availability and Auto-failover with minimum or no manual interventions
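The backup-and-restoration bullet above (low RTO and RPO) usually implies a retention policy: how many recent snapshots to keep before pruning. As a hedged illustration, here is a small Python sketch of the pruning decision; the 7-day daily window is an assumption, not LogiNext's actual policy:

```python
from datetime import date, timedelta

def backups_to_delete(snapshots, keep_daily=7, today=None):
    """Given snapshot dates, return the ones outside the daily
    retention window, oldest first.

    Keeping `keep_daily` daily snapshots bounds the restore point
    you can fall back to; the RPO itself is set by how often new
    snapshots are taken (here assumed daily, i.e. up to 24h).
    """
    today = today or date.today()
    cutoff = today - timedelta(days=keep_daily)
    return sorted(d for d in snapshots if d < cutoff)

# Ten daily snapshots from Jan 1 to Jan 10; keep the last 7
snaps = [date(2024, 1, 1) + timedelta(days=i) for i in range(10)]
stale = backups_to_delete(snaps, keep_daily=7, today=date(2024, 1, 10))
```

Real setups typically layer daily, weekly, and monthly tiers on the same idea, and the deletion list would be fed to the cloud provider's snapshot API rather than acted on directly.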
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix Administration and Python/Shell Scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in query analysis, performance tuning, and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills.
- Excellent leadership skills.
upraisal
- Work towards improving the following 4 verticals for the company's workflows and products: scalability, availability, security, and cost.
- Help in provisioning, managing, optimizing cloud infrastructure in AWS (IAM, EC2, RDS, CloudFront, S3, ECS, Lambda, ELK etc.)
- Work with the development teams to design scalable, robust systems using cloud architecture for both 0-to-1 and 1-to-100 products.
- Drive technical initiatives and architectural service improvements.
- Be able to predict problems and implement solutions that detect and prevent outages.
- Mentor/manage a team of engineers.
- Design solutions with failure scenarios in mind to ensure reliability.
- Document rigorously to keep track of all changes/upgrades to the infrastructure and share knowledge with the rest of the team
- Identify vulnerabilities during development with actionable information to empower developers to remediate vulnerabilities
- Automate the build and testing processes to consistently integrate code
- Manage changes to documents, software, images, large web sites, and other collections of code, configuration, and metadata among disparate teams
Rapidly growing fintech SaaS firm that propels business growth
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools like Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain GIT repos, Jenkins, UCD, etc. for multi-hosting cloud environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Work on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve a successful implementation of integrated solutions within the portfolio.
- Have the necessary technical and professional expertise.
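Several of the responsibilities above (Git repos, Jenkins, Docker images) meet in one small CI/CD chore: deriving a valid, traceable image tag from the branch and commit. A hedged Python sketch; the registry URL, branch, and SHA are invented for illustration:

```python
import re

def image_tag(repo: str, branch: str, sha: str) -> str:
    """Build a deterministic image reference for a CI pipeline.

    Branch names may contain '/' (e.g. feature/login), which is not
    legal inside a Docker tag, so unsafe characters are normalised
    to '-'. The short commit SHA makes every build traceable back
    to the exact revision in Git.
    """
    safe_branch = re.sub(r"[^a-zA-Z0-9_.-]+", "-", branch).strip("-").lower()
    return f"{repo}:{safe_branch}-{sha[:7]}"

tag = image_tag("registry.example.com/app", "feature/login", "9f8d7c6b5a4e3f2d1c0b")
```

A Jenkins or similar pipeline step would compute this once and reuse it for `docker build`, `docker push`, and the Kubernetes deployment manifest, so every environment runs an image that can be traced to a commit.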
What are we looking for?
- Minimum 5-12 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipeline.
- Experience in DevOps automation tools. Well versed with DevOps Frameworks, and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience and good understanding of any Cloud like AWS, Azure, or Google cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with Jira is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being here.
We are
It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products - Plum, Empuls, and Compass. It works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, it is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
at 6sense
The Company:
It’s no surprise that 6sense is named a top workplace year after year — we have industry-leading technology developed and taken to market by a world-class team. 6sense is Top Rated on Glassdoor with a 4.9/5, and our CEO Jason Zintak was recognized as the #1 CEO in the small & medium business category in Glassdoor’s 2021 Top CEO Employees’ Choice Awards (https://www.glassdoor.com/Award/Top-CEOs-at-SMBs-LST_KQ0%2C16.htm).
In 2021, the company was recognized for having the Best Company for Diversity, Best Company for Women, Best CEO, Best Company Culture, Best Company Perks & Benefits and Happiest Employees from the employee feedback platform Comparably. In addition, 6sense has also won several accolades that demonstrate its reputation as an employer of choice including the Glassdoor Best Place to Work (2022), TrustRadius Tech Cares (2021) and Inc. Best Workplaces (2022, 2021, 2020, 2019).
6sense reinvents the way organizations create, manage, and convert pipeline to revenue. The 6sense Revenue AI captures anonymous buying signals, predicts the right accounts to target at the ideal time, and recommends the channels and messages to boost revenue performance. Removing guesswork, friction and wasted sales effort, 6sense empowers sales, marketing, and customer success teams to significantly improve pipeline quality, accelerate sales velocity, increase conversion rates, and grow revenue predictably.
Senior Software Engineer - Infrastructure, Cloud
Responsibilities:
Develop and deploy services to improve the availability, ease of use/management, and visibility of 6sense systems
Building and scaling out our services and infrastructure
Learning and adopting technologies that may aid in solving our challenges
Own our critical underlying systems like AWS, Kubernetes, Mesos, infrastructure deployment, and compute cluster architecture (which services frameworks and engines like Hadoop/Hive/Presto)
Write/review/debug production code, develop documentation and capacity plans, and debug live production problems. Contribute back to open-source projects when we need to add or patch functionality.
Support the overall Software Engineering team to resolve any issues they encounter
Minimum Qualifications:
5+ years of experience with Linux/Unix system administration and networking fundamentals
3+ years in a Software Engineering role or equivalent experience
4+ years of working with AWS
4+ years of experience working with Kubernetes, Docker.
Strong skills in reading code as well as writing clean, maintainable, and scalable code
Good knowledge of Python
Experience designing, building, and maintaining scalable services and/or service-oriented architecture
Experience with high-availability
Experience with modern configuration management tools (e.g. Ansible/AWX, Chef, Puppet, Pulumi) and idempotency
Bonus Requirements:
Knowledge of standard security practices
Knowledge of the Hadoop ecosystem (e.g. Hadoop, Hive, Presto) including deployment, scaling, and maintenance
Experience with operating and maintaining VPN/SSH/ZeroTrust access infrastructure
Experience with CDNs such as CloudFront and Akamai
Good knowledge of Javascript, Java, Golang
Exposure to modern build systems such as Bazel, Buck, or Pants
Every person in every role at 6sense owns a part of defining the future of our industry-leading technology. You’ll join a team where curiosity is prized, no one’s satisfied with the status quo, and everyone’s all-in on the collective good. 6sense is a place where difference-makers roll up their sleeves, take risks, act with integrity, and measure success by the value we create for our customers.
We want 6sense to be the best chapter of your career.
Feel part of something
You’ll be part of building tomorrow’s tech, revolutionizing how marketing and sales teams create, manage, and convert pipeline to revenue. And you’ll be seen and appreciated by co-workers who challenge you, cheer you on, and always have your back.
At 6sense, you’ll experience the passion from customers and colleagues alike for our market-leading vision, and you're entrusted with applying your unique talents to help bring that vision to life.
Build a career
As part of a company on a rocketship trajectory, there’s no way around it: You’re going to experience unparalleled career growth. With colleagues as humble and hungry as you are, and a leadership philosophy grounded in trust, transparency, and empowerment, every day is a chance to improve on the one before.
Enjoy access to our Udemy Training Library with 5,000+ courses, give and get recognition from your coworkers, and spend time with our executive team every two weeks in our All Hands gathering to connect, learn and ask leaders about whatever is on your mind.
Enjoy work, and your life
This is a place where you’ll do your best work and inspire others to do theirs — where you’re guaranteed to make real connections, for life, along the way.
We want to help you prioritize health and wellness, today and tomorrow. Take advantage of family medical coverage; a monthly stipend to support your physical, mental, and financial wellness; and generous paid parental leave benefits. Plus, we have an open time-off policy, so you can take the time you need.
Set for success
A vision as big as ours only comes to life when we’re all winning together.
We’ll make sure you have the equipment you need to work at home or in one of our offices. And have the right snacks, pens or lighting with our work-from-home expense reimbursement allowance. We also partner with WeWork to make sure that if your choice is a hybrid of home and office, we have you covered in the locations they’re offered.
That’s the commitment we make to every one of our employees. If this sounds like a place where you'll thrive as you take your success to the next level, let’s chat!
Profile Description:
The job holder will work with developers and IT staff to oversee code releases, combining an understanding of both engineering and coding. From creating and implementing systems software to analyzing data to improve existing systems, a DevOps Engineer increases productivity in the workplace.
Key Responsibilities:
- Understanding customer requirements and project KPIs
- Implementing various development, testing, automation tools, and IT infrastructure
- Planning the team structure and activities, and participating in project management
- Managing stakeholders and external interfaces
- Setting up tools and required infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Reviewing, verifying, and validating the software code developed in the project
- Applying troubleshooting techniques and fixing code bugs
- Monitoring processes throughout the lifecycle for adherence, and updating or creating new processes for improvement and to minimize waste
- Encouraging and building automated processes wherever possible
- Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
- Incident management and root cause analysis
- Coordination and communication within the team and with customers
- Selecting and deploying appropriate CI/CD tools
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment (CI/CD) pipelines
- Mentoring and guiding the team members
- Monitoring and measuring customer experience and KPIs
- Managing periodic reporting on the progress to the management and the customer
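The CI/CD responsibilities above come down to chaining stages so that a failure stops the release before it reaches deployment. A toy fail-fast illustration in Python (the stage names and pass/fail checks are invented for illustration; a real pipeline would shell out to build and test tools):

```python
def run_pipeline(stages):
    """Run (name, check) stages in order; stop at the first failure.

    Returns the list of stage names that actually ran, mirroring how
    a CI/CD pipeline never deploys if build or test fails.
    """
    ran = []
    for name, check in stages:
        ran.append(name)
        if not check():
            break  # fail fast: later stages (e.g. deploy) are skipped
    return ran

# Hypothetical stages for demonstration only.
pipeline = [
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test run...
    ("deploy", lambda: True),  # ...means deploy never executes
]
```

With the failing test above, `run_pipeline(pipeline)` stops after "test" and "deploy" is never reached.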
We are looking for an excellent, experienced person in the DevOps field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.
Responsibilities
- The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
- Understanding accessibility and security compliance (depending on the specific project)
- User authentication and authorization between multiple systems, servers, and environments
- Integration of multiple data sources and databases into one system
- Understanding fundamental design principles behind a scalable application
- Configuration management tools (Ansible/Chef/Puppet); Cloud Service Providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem is a plus.
- Should be able to make key decisions for our infrastructure, networking, and security.
- Working with shell scripts during migrations and DB connections.
- Monitoring production server health across parameters (CPU load, physical memory, swap memory) and setting up a monitoring tool such as Nagios to track production server health.
- Creating alerts and configuring monitoring of specified metrics to manage cloud infrastructure efficiently.
- Setting up and managing VPCs and subnets; making connections between different zones; blocking suspicious IPs/subnets via ACLs.
- Creating/managing AMIs, snapshots, and volumes; upgrading/downgrading AWS resources (CPU, memory, EBS).
- Responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams.
- Strong knowledge of configuration management tools such as Ansible, Chef, and Puppet.
- Extensive work with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error-log reports.
- Experienced in troubleshooting, backup, and recovery.
- Excellent knowledge of cloud service providers such as AWS and DigitalOcean.
- Good knowledge of the Docker and Kubernetes ecosystem.
- Proficient understanding of code-versioning tools, such as Git.
- Must have experience working in an automated environment.
- Good knowledge of AWS services such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC, and Amazon CloudWatch.
- Scheduling jobs using crontab; creating swap memory.
- Proficient knowledge of access management (IAM).
- Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
- Candidates should have good knowledge of GCP.
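The monitoring duties listed above (CPU load, physical memory, swap) can be sampled with nothing but the standard library on a Linux host. A rough sketch of the CPU-load piece (the threshold is illustrative, not from the posting; a real monitor such as Nagios or the CloudWatch agent would also cover memory and swap):

```python
import os

def load_check(threshold: float = 4.0) -> dict:
    """Report the 1/5/15-minute load averages and flag overload.

    os.getloadavg() is available on Linux and macOS. The returned dict
    could feed an alerting pipeline or a Nagios-style check script.
    """
    one, five, fifteen = os.getloadavg()
    return {
        "load_1m": one,
        "load_5m": five,
        "load_15m": fifteen,
        "overloaded": one > threshold,  # the alert condition
    }
```

A cron job could run this every minute and page when `overloaded` is true for several consecutive samples.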
Educational Qualifications
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in the relevant field
Experience: 2-6 years
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer, addressing next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software as a service, and cloud services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack)
· Exposure to relational database technologies (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engine, caching, etc.
· Professional Certificates in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multi-national client time zones.
Will be involved in solution designing from the conceptual stages through development cycle and deployments.
Involved in development operations & support for internal teams
Improve infrastructure uptime, performance, resilience, reliability through automation
Willing to learn new technologies and work on research-orientated projects
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
www.banyandata.com
Global Cloud Messaging Leader
What you will do:
- Handling Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, Database management, Backups and monitoring
- Logging, metrics and alerting management
- Creating Docker files
- Performing root cause analysis for production errors
What you need to have:
- 12+ years of experience in Software Development/ QA/ Software Deployment with 5+ years of experience in managing high performing teams
- Proficiency in VMware, AWS & cloud applications development, deployment
- Good knowledge in Java, Node.js
- Experience working with RESTful APIs, JSON etc
- Experience with Unit/ Functional automation is a plus
- Experience with MySQL, MongoDB, Redis, RabbitMQ
- Proficiency in Jenkins, Ansible, Terraform/Chef/Ant
- Proficiency in Linux based Operating Systems
- Proficiency in cloud infrastructure like Docker, Kubernetes
- Strong problem solving and analytical skills
- Good written and oral communication skills
- Sound understanding in areas of Computer Science such as algorithms, data structures, object oriented design, databases
- Proficiency in monitoring and observability
A digital business enablement MNC
Minimum 4 years exp
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, Github, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem solving skills, Good verbal and written communication skills
- Good knowledge of Linux environment: RedHat etc.
- Good in shell scripting
- Good to have Cloud Technology : AWS, GCP and Azure
A fast-growing SaaS commerce company based in Bangalore
As DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools like Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
- Setup, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-host cloud environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Working on Docker images and maintaining Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Working on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
- Required Technical and Professional Expertise.
- Minimum 4-6 years of experience in IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools, and very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience and good understanding in any of Cloud like AWS, Azure, Google cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience with working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience and well versed with Jira tool is a plus.
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. However, we assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
The AWS Cloud/Devops Engineer will be working with the engineering team and focusing on AWS infrastructure and automation. A key part of the role is championing and leading infrastructure as code. The Engineer will work closely with the Manager of Operations and Devops to build, manage and automate our AWS infrastructure.
Duties & Responsibilities:
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
Qualifications:
- 1-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Expertise using Chef for configuration management
- Hands-on experience deploying and managing infrastructure with Terraform
- Solid foundation of networking and Linux administration
- Experience with CI-CD, Docker, GitLab, Jenkins, ELK and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Strong bias for action and ownership
Our client company is into Hospitality. (TH1)
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.
- 10+ years of experience in the IT industry in software development.
- Strong Python full-stack development skills.
- Work experience in OpenStack, Ansible, shell scripting, Chef, Puppet, Docker, ELK, OpenTSDB, Kafka, ZooKeeper, and Grafana.
- Work experience in SDN/NFV.
- Work experience in AWS and Azure clouds.
- Experience in Python, HTML, and CSS.
- Work experience in open source and OpenFlow controllers (SDN).
- Work experience with Zabbix, Nagios, and OpenNMS monitoring tools.
- Experience with Agile methodology.
- Work experience with Type 1 and Type 2 hypervisors and KVM.
- Good knowledge of OOP concepts and the Django ORM.
- Good knowledge of MySQL, PostgreSQL, HDFS, and time-series databases.
- Basic knowledge of JavaScript and jQuery.
- Good knowledge of ONOS, OpenKilda, and Mininet.
- Work experience in SDN/NFV orchestration.
- Work experience with supply chain management systems.
- Work experience with MVT/MVC architecture.
- Good knowledge of networks, devices, service modeling, and automation in systems.
- Work experience with API and JSON implementation.
- Good understanding of software development (i.e., the SDLC).
- Good team player; enthusiastic and a quick learner.
- Good interpersonal skills, commitment, and a results-oriented drive to learn new technologies and take on challenging tasks.
Product organization that provides "Pick and drop services"
Experience: 4-7 years
- Any scripting language: Python, Scala, shell, or bash
- Cloud: AWS
- Database: relational (SQL) & non-relational (NoSQL)
- CI/CD tools and version control
Our Client company is into Telecommunications(SY1)
- Cloud and virtualization-based technologies (Amazon Web Services (AWS), VMware).
- Java application server administration (WebLogic, WildFly, JBoss, Tomcat).
- Docker and Kubernetes (EKS)
- Linux/UNIX Administration (Amazon Linux and RedHat).
- Developing and supporting cloud infrastructure designs and implementations and guiding application development teams.
- Configuration management tools (Chef, Puppet, or Ansible).
- Log aggregations tools such as Elastic and/or Splunk.
- Automate infrastructure and application deployment-related tasks using terraform.
- Automate repetitive tasks required to maintain a secure and up-to-date operational environment.
Responsibilities
- Build and support always-available private/public cloud-based software-as-a-service (SaaS) applications.
- Build AWS or other public cloud infrastructure using Terraform.
- Deploy and manage Kubernetes (EKS) based docker applications in AWS.
- Create custom OS images using Packer.
- Create and revise infrastructure and architectural designs and implementation plans and guide the implementation with operations.
- Liaison between application development, infrastructure support, and tools (IT Services) teams.
- Development and documentation of Chef recipes and/or Ansible scripts; support throughout the entire deployment lifecycle (development, quality assurance, and production).
- Help developers leverage infrastructure, application, and cloud platform features and functionality participate in code and design reviews, and support developers by building CI/CD pipelines using Bamboo, Jenkins, or Spinnaker.
- Create knowledge-sharing presentations and documentation to help developers and operations teams understand and leverage the system's capabilities.
- Learn on the job and explore new technologies with little supervision.
- Leverage scripting (BASH, Perl, Ruby, Python) to build required automation and tools on an ad-hoc basis.
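A common instance of the ad-hoc scripting described in the last bullet is pruning stale files (old logs, expired build artifacts). A small Python sketch, with the directory layout and age cutoff purely illustrative:

```python
import os
import time

def stale_files(directory: str, max_age_days: float):
    """List files under `directory` older than `max_age_days`.

    A typical automation building block: a cron job could feed the
    result to os.remove() to prune old logs or build artifacts.
    """
    cutoff = time.time() - max_age_days * 86400  # seconds per day
    stale = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            stale.append(path)
    return stale
```

Separating "find" from "delete" keeps the script easy to dry-run before wiring it into a scheduled job.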
Who we have in mind:
- Solid experience in building a solution on AWS or other public cloud services using Terraform.
- Excellent problem-solving skills with a desire to take on responsibility.
- Extensive knowledge in containerized application and deployment in Kubernetes
- Extensive knowledge of the Linux operating system, RHEL preferred.
- Proficiency with shell scripting.
- Experience with Java application servers.
- Experience with GiT and Subversion.
- Excellent written and verbal communication skills with the ability to communicate technical issues to non-technical and technical audiences.
- Experience working in a large-scale operational environment.
- Internet and operating system security fundamentals.
- Extensive knowledge of massively scalable systems. Linux operating system/application development desirable.
- Programming in scripting languages such as Python. Other object-oriented languages (C++, Java) are a plus.
- Experience with Configuration Management Automation tools (chef or puppet).
- Experience with virtualization, preferably on multiple hypervisors.
- BS/MS in Computer Science or equivalent experience.
- Excellent written and verbal skills.
Education or Equivalent Experience:
- Bachelor's degree or equivalent education in related fields
- Certificates of training in associated fields/equipment
This is for Product based organisation in Pune.
If you are looking for good opportunity in Cloud Development/Devops. Here is the right opportunity.
EXP: 4-10 YRs
Location:Pune
Job Type: Permanent
Minimum qualifications:
- Education: Bachelor's or Master's degree
- Proficient in the English language.
Relevant experience:
- Should have been working for at least four years as a DevOps/Cloud Engineer
- Should have worked on AWS Cloud Environment in depth
- Should have been working in an Infrastructure as code environment or understands it very clearly.
- Has done infrastructure coding using CloudFormation/Terraform, configuration management using Chef/Ansible, and an enterprise bus (RabbitMQ/Kafka)
- Deep understanding of microservice design, and aware of centralized caching (Redis) and centralized configuration (Consul/ZooKeeper)
at Goodera
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities:
This is a highly accountable role and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensure regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• First point of contact for handling customer (may be internal stakeholders) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Develop tooling and processes to drive and improve customer experience, create playbooks.
• Eliminate manual tasks via configuration management.
• Intelligently migrate services from one AWS region to other AWS regions.
• Create, implement and maintain security policies to ensure ISO/ GDPR / SOC / PCI compliance.
• Verify infrastructure Automation meets compliance goals and is current with disaster recovery plan.
• Evangelize configuration management and automation to other product developers.
• Keep up to date with upcoming technologies to maintain state-of-the-art infrastructure.
Required Candidate profile :
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.)
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands on Experience in Docker is a big plus.
• Experience working in an Agile, fast paced, DevOps environment.
• Strong Knowledge in DB such as MongoDB / MySQL / DynamoDB / Redis / Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, Nagios, etc.
• Fluency with version control systems with a preference for Git *
• Strong Linux-based infrastructures, Linux administration
• Experience with installing and configuring application servers such as WebLogic, JBoss and Tomcat.
• Hands-on in logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and to train others on technical and procedural topics.
US based product engineering company
Required Skills and Experience
- 4+ years of relevant experience with DevOps tools Jenkins, Ansible, Chef etc
- 4+ years of experience in continuous integration/deployment and software tools development experience with Python and shell scripts etc
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge and experience in cloud orchestration tools such as AWS Cloudformation/Terraform etc
- Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
Good to have:
- Strong understanding of security concepts, methodologies and apply them such as SSH, public key encryption, access credentials, certificates etc.
- Knowledge of database administration such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
Roles and Responsibilities
- Managing Availability, Performance, Capacity of infrastructure and applications.
- Building and implementing observability for applications health/performance/capacity.
- Optimizing On-call rotations and processes.
- Documenting “tribal” knowledge.
- Managing infra platforms like Mesos/Kubernetes, CI/CD, observability (Prometheus/New Relic/ELK), cloud platforms (AWS/Azure), databases, and data platforms infrastructure.
- Providing help in onboarding new services with a production-readiness review process.
- Providing reports on services SLO/Error Budgets/Alerts and Operational Overhead.
- Working with Dev and Product teams to define SLO/Error Budgets/Alerts.
- Working with the Dev team to have an in-depth understanding of the application architecture and its bottlenecks.
- Identifying observability gaps in product services and infrastructure, and working with stakeholders to fix them.
- Managing outages and doing detailed RCAs with developers, and identifying ways to avoid such situations.
- Managing/Automating upgrades of the infrastructure services.
- Automate toil work.
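The SLO and error-budget reporting mentioned above follows directly from the SLO target: a 99.9% availability SLO leaves 0.1% of the period as error budget, about 43 minutes in a 30-day month. A quick calculation (the 30-day default is an assumed convention):

```python
def error_budget_minutes(slo: float, days: float = 30.0) -> float:
    """Minutes of allowed downtime for an availability SLO over `days`.

    E.g. slo=0.999 over a 30-day month:
    30 * 24 * 60 minutes * (1 - 0.999) = 43.2 minutes of budget.
    """
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - slo)
```

Alerts can then fire on the budget's burn rate (how fast downtime is consuming those minutes) rather than on raw error counts.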
Experience & Skills
- 6+ years of total experience
- Experience as an SRE/DevOps/Infrastructure Engineer on large scale microservices and infrastructure.
- A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
- A deep understanding of computer science, software development, and networking principles.
- Demonstrated experience with languages such as Python, Java, Golang, etc.
- Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
- Extensive experience in DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
- Expertise in GitOps and infrastructure-as-code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, and Ansible.
- Expertise in Amazon Web Services (AWS) and/or other relevant cloud infrastructure solutions like Microsoft Azure or Google Cloud.
- Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo, etc.
- Experience in managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
About Us:
100ms is building a Platform-as-a-Service for developers integrating video-conferencing experiences into their apps. Our SDKs enable developers to add gold standard audio-video quality conferencing with much faster shipping times.
We are a team uniquely placed to work on this problem. We have built world-record scale live video infrastructure powering billions of live video minutes in a day. We are a remote-first global team with engineers who've built video teams at Facebook and Hotstar.
As part of the infrastructure team, you will be mainly responsible for looking after the cloud infrastructure.
You Will Be:
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Driving centralized solutions like logging, rate limiting, service discovery
- Working on ways to automate and improve development and release processes
- Ensuring that systems are safe and secure against cybersecurity threats
You Have:
- Bachelor's degree or equivalent practical experience
- 4 years of professional software development experience, or 2 years with an advanced degree
- Expertise in managing large-scale cloud infrastructure, preferably AWS and Kubernetes
- Experience in developing applications using programming languages like Python, Golang and Ruby
- Hands-on experience with Prometheus, Grafana, Fluentd, Splunk, etc.
Good To Have:
- Knowledge of Terraform, Chef, Helm, etc.
- Ability to take on complex and ambiguous problems
- Strong inclination to keep up to date with the latest trends, learn new concepts, and contribute to open-source projects, and eagerness to talk about ideas in internal or external forums
You Will Gain:
- You'll be part of a small team at a fast-growing engineering-first startup
- You'll work with engineers across the globe with experience at Facebook and Hotstar
- You can grow as an individual contributor or as a team leader - freedom to set your own goals
- You'll work on problems at the cutting-edge of real-time video communication technology at massive scale
As DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities:
- Build & operate infrastructure to support the website, backend clusters, and ML projects in the organization.
- Helping teams become more autonomous and allowing the Operation team to focus on improving the infrastructure and optimizing processes.
- Delivering system management tooling to the engineering teams.
- Working on your own applications which will be used internally.
- Contributing to open source projects that we are using (or that we may start).
- Be an advocate for engineering best practices in and out of the company.
- Organizing tech talks and participating in meetups and representing Box8 at industry events.
- Sharing pager duty for the rare instances of something serious happening.
- Collaborate with other developers to understand & setup tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements:
- 1+ Years Of Industry Experience Scale existing back end systems to handle ever increasing amounts of traffic and new product requirements.
- Ruby On Rails or Python and Bash/Shell skills.
- Experience managing complex systems at scale.
- Experience with Docker, rkt or similar container engine.
- Experience with Kubernetes or similar clustering solutions.
- Experience with tools such as Ansible or Chef Understanding of the importance of smart metrics and alerting.
- Hands on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in working on linux based servers.
- Managing large scale production grade infrastructure on AWS Cloud.
- Good Knowledge on scripting languages like ruby, python or bash.
- Experience in creating in deployment pipeline from scratch.
- Expertise in any of the CI tools, preferably Jenkins.
- Good knowledge of docker containers and its usage.
- Using Infra/App Monitoring tools like, CloudWatch/Newrelic/Sensu.
Good to have:
- Knowledge of Ruby on Rails based applications and its deployment methodologies.
- Experience working on Container Orchestration tools like Kubernetes/ECS/Mesos.
- Extra Points For Experience With Front-end development NewRelic GCP Kafka, Elasticsearch.
Knowledge of Hadoop ecosystem installation, initial-configuration and performance tuning.
Expert with Apache Ambari, Spark, Unix Shell scripting, Kubernetes and Docker
Knowledge on python would be desirable.
Experience with HDP Manager/clients and various dashboards.
Understanding on Hadoop Security (Kerberos, Ranger and Knox) and encryption and Data masking.
Experience with automation/configuration management using Chef, Ansible or an equivalent.
Strong experience with any Linux distribution.
Basic understanding of network technologies, CPU, memory and storage.
Database administration a plus.
Qualifications and Education Requirements
2 to 4 years of experience with and detailed knowledge of Core Hadoop Components solutions and
dashboards running on Big Data technologies such as Hadoop/Spark.
Bachelor degree or equivalent in Computer Science or Information Technology or related fields.
- Mandatory: Docker, AWS, Linux, Kubernete or ECS
- Prior experience provisioning and spinning up AWS Clusters / Kubernetes
- Production experience to build scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor and optimise production database servers
- Prior work with Cloud Monitoring tools (Nagios, Cacti, CloudWatch etc.)
- Experience with Docker, Kubernetes, Mesos, NoSQL databases (DynamoDB, Cassandra, MongoDB, etc)
- Other Open Source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge on Linux Environment.
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects.
- Working knowledge of Configuration Management (Chef, Puppet or Ansible preferred) Continuous Integration Tools (Jenkins preferred)
- Experience in handling large production deployments and infrastructure.
- DevOps based infrastructure and application deployments experience.
- Working knowledge of the AWS network architecture including designing VPN solutions between regions and subnets
- Hands-on knowledge with the AWS AMI architecture including the development of machine templates and blueprints
- He/she should be able to validate that the environment meets all security and compliance controls.
- Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, Cost Management Platform.
- Proven written and verbal communication skills.
- Understands and can serve as the technical team lead to oversee the build of the Cloud environment based on customer requirements.
- Previous NOC experience.
- Client Facing Experience with excellent Customer Communication and Documentation Skills
Summary
We are building the fastest, most reliable & intelligent trading platform. That requires highly available, scalable & performant systems. And you will be playing one of the most crucial roles in making this happen.
You will be leading our efforts in designing, automating, deploying, scaling and monitoring all our core products.
Tech Facts so Far
1. 8+ services deployed on 50+ servers
2. 35K+ concurrent users on average
3. 1M+ algorithms run every min
4. 100M+ messages/min
We are a 4-member backend team with 1 Devops Engineer. Yes! this is all done by this incredible lean team.
Big Challenges for You
1. Manage 25+ services on 200+ servers
2. Achieve 99.999% (5 Nines) availability
3. Make 1-minute automated deployments possible
If you like to work on extreme scale, complexity & availability, then you will love it here.
Who are we
We are on a mission to help retail traders prosper in the stock market. In just 3 years, we have the 3rd most popular app for the stock markets in India. And we are aiming to be the de-facto trading app in the next 2 years.
We are a young, lean team of ordinary people that is building exceptional products, that solve real problems. We love to innovate, thrill customers and work with brilliant & humble humans.
Key Objectives for You
• Spearhead system & network architecture
• CI, CD & Automated Deployments
• Achieve 99.999% availability
• Ensure in-depth & real-time monitoring, alerting & analytics
• Enable faster root cause analysis with improved visibility
• Ensure a high level of security
Possible Growth Paths for You
• Be our Lead DevOps Engineer
• Be a Performance & Security Expert
Perks
• Challenges that will push you beyond your limits
• A democratic place where everyone is heard & aware
• Bachelor or Master Degree in Computer Science, Software Engineering from a reputed
University.
• 5 - 8 Years of experience in building scalable, secure and compliant systems.
• More than 2 years of experience in working with GCP deployment for millions of daily visitors
• 5+ years hosting experience in a large heavy-traffic environment
• 5+ years production application support experience in a high uptime environment
• Software development and monitoring knowledge with Automated builds
• Technology:
o Cloud: AWS or Google Cloud
o Source Control: Gitlab or Bitbucket or Github
o Container Concepts: Docker, Microservices
o Continuous Integration: Jenkins, Bamboos
o Infrastructure Automation: Puppet, Chef or Ansible
o Deployment Automation: Jenkins, VSTS or Octopus Deploy
o Orchestration: Kubernets, Mesos, Swarm
o Automation: Node JS or Python
o Linux environment network administration, DNS, firewall and security management
• Ability to be adapt to the startup culture, handle multiple competing priorities, meet
deadlines and troubleshoot problems.
Engineering group to plan ongoing feature development, product maintenance.
• Familiar with Virtualization, Containers - Kubernetes, Core Networking, Cloud Native
Development, Platform as a Service – Cloud Foundry, Infrastructure as a Service, Distributed
Systems etc
• Implementing tools and processes for deployment, monitoring, alerting, automation, scalability,
and ensuring maximum availability of server infrastructure
• Should be able to manage distributed big data systems such as hadoop, storm, mongoDB,
elastic search and cassandra etc.,
• Troubleshooting multiple deployment servers, Software installation, Managing licensing etc,.
• Plan, coordinate, and implement network security measures in order to protect data, software, and
hardware.
• Monitor the performance of computer systems and networks, and to coordinate computer network
access and use.
• Design, configure and test computer hardware, networking software, and operating system
software.
• Recommend changes to improve systems and network configurations, and determine hardware or
software requirements related to such changes.