About the company:
Our Client is a B2B2C Web3 tech startup founded by IIT Bombay graduates experienced in retail, e-commerce, and fintech.
Vision: Our Client aims to change the way customers, creators, and retail investors interact and transact with brands of all shapes and sizes, essentially becoming a Web3, brand-driven social e-commerce and investment platform.
Role Description
We are looking for a DevOps Engineer responsible for managing cloud technologies, deployment automation, and CI/CD.
Key Responsibilities
Building and setting up new development tools and infrastructure
Understanding the needs of stakeholders and conveying this to developers
Working on ways to automate and improve development and release processes
Testing and examining code written by others and analyzing results
Ensuring that systems are safe and secure against cybersecurity threats
Identifying technical problems and developing software updates and ‘fixes’
Working with software developers and software engineers to ensure that development follows established processes and works as intended
Planning out projects and being involved in project management decisions
Required Skills and Qualifications
BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
4+ years of overall development experience.
Strong understanding of cloud deployment and setup.
Hands-on experience with tools like Jenkins, Gradle, etc.
Deploy updates and fixes.
Provide Level 2 technical support.
Build tools to reduce occurrences of errors and improve customer experience.
Perform root cause analysis for production errors.
Investigate and resolve technical issues.
Develop scripts to automate deployment.
Design procedures for system troubleshooting and maintenance.
Proficient with git and git workflows.
Working knowledge of databases and SQL.
Problem-solving attitude.
Collaborative team spirit
Regards
Team Merito
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle
- We have both cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra-fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps and CI/CD
- Deep experience in: Gitlab, Gitops, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially with GPU processing. This doesn’t need to include model training, data gathering, etc. We’re looking more for experience on model deployment, and inferencing tasks at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time (see the sketch after this list)
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Expertise in Docker-based virtualization including - creating & maintaining custom Docker images, deployment of Docker images on cloud and on-premise services, monitoring of production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, availability
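A minimal sketch of the kind of long-running, fault-tolerant Python worker described above (multiprocessing, defensive error handling, pymongo writes). The connection string, database and collection names, and the process_item logic are hypothetical placeholders; it assumes a reachable MongoDB instance and the pymongo package.

```python
import logging
import time
from multiprocessing import Pool

from pymongo import MongoClient
from pymongo.errors import PyMongoError

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("worker")

def process_item(item_id: int) -> dict:
    # Placeholder for the real CPU/GPU-bound processing step.
    return {"item_id": item_id, "status": "done"}

def run_forever(batch: list[int]) -> None:
    # Hypothetical connection string and database/collection names.
    client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=5000)
    results = client["pipeline"]["results"]
    while True:
        for item_id in batch:
            try:
                results.insert_one(process_item(item_id))
            except PyMongoError:
                # Log and keep going; a long-lived worker should not die on one failure.
                log.exception("Mongo write failed for item %s, retrying later", item_id)
                time.sleep(5)
            except Exception:
                log.exception("Unexpected error for item %s, skipping", item_id)
        time.sleep(1)  # Avoid a tight loop between passes over the batch.

if __name__ == "__main__":
    batches = [[1, 2, 3], [4, 5, 6]]
    with Pool(processes=len(batches)) as pool:
        pool.map(run_forever, batches)  # One worker process per batch, running indefinitely.
```

The point of the sketch is that a single bad item or a transient database error is logged and retried rather than crashing a worker that is meant to run for weeks.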
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g., OpenCV, Pillow, etc.)
- Experience with PostgreSQL and MongoDB (Or SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelors or Masters degree in computer science or related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux / Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years
Intuitive is now hiring a DevOps Consultant for full-time employment
DevOps Consultant
Work Timings: 6.30 AM IST – 3.30 PM IST (HKT 9.00 AM – 6.00 PM)
Key Skills / Requirements:
- Mandatory
- Integrating Jenkins pipelines with Terraform and Kubernetes
- Writing Jenkins JobDSL and Jenkinsfiles using Groovy
- Automation and integration with AWS using Python and AWS CLI (see the sketch after this list)
- Writing ad-hoc automation scripts using Bash and Python
- Configuring container-based application deployment to Kubernetes
- Git and branching strategies
- Integration of tests in pipelines
- Build and release management
- Deployment strategies (e.g., canary, blue/green)
- TLS/SSL certificate management
- Beneficial
- Service Mesh
- Deployment of Java Spring Boot based applications
- Automated credential rotation using AWS Secrets Manager
- Log management and dashboard creation using Splunk
- Application monitoring using AppDynamics
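For the mandatory "automation and integration with AWS using Python" item, a minimal boto3 sketch along these lines would be typical; the region and the running-state filter are illustrative, and it assumes boto3 is installed and AWS credentials are already configured.

```python
import boto3

def list_running_instances(region: str = "ap-south-1") -> list[dict]:
    """Return id, type, and Name tag for every running EC2 instance in a region."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    instances = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                instances.append(
                    {
                        "id": inst["InstanceId"],
                        "type": inst["InstanceType"],
                        "name": tags.get("Name", ""),
                    }
                )
    return instances

if __name__ == "__main__":
    for inst in list_running_instances():
        print(f'{inst["id"]}  {inst["type"]}  {inst["name"]}')
```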
If your profile matches the requirements, please share your resume at anitha.katintuitvedotcloud
Regards,
Anitha. K
TAG Specialist
- Responsible for building, managing, and maintaining deployment pipelines and developing self-service tooling for managing Git, Linux, Kubernetes, Docker, CI/CD & pipelining, etc., in cloud infrastructure
- Responsible for building and managing the DevOps agile tool chain
- Responsible for working as an integrator between developer teams and various cloud infrastructures.
- Responsibilities include helping the development team with best practices, provisioning monitoring, troubleshooting, optimizing and tuning, automating and improving deployment and release processes.
- Responsible for maintaining application security with periodic tracking and upgrading of package dependencies in coordination with the respective developer teams.
- Responsible for packaging and containerization of deployable units and strategizing this in coordination with developer teams
- Setting up tools and required infrastructure. Defining and setting development, test, release, update, and support processes for DevOps operation
- Responsible for documentation of the process.
- Responsible for leading projects with end-to-end execution
Qualification: Bachelor of Engineering / MCA, preferably with AWS Cloud certification
Ideal Candidate -
- has 2-4 years of DevOps experience with AWS certification.
- is under 30 years of age, self-motivated, and enthusiastic.
- is interested in building a sustainable DevOps platform with maximum automation.
- is interested in learning and being challenged on a day-to-day basis.
- can take ownership of tasks and is willing to take the necessary action to get them done.
- can solve complex problems.
- is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.
Hands-on experience with Linux administration
Experience using Python or Shell scripting (for automation)
Hands-on experience with implementation of CI/CD processes
Experience working with one cloud platform (AWS, Azure, or Google Cloud)
Experience working with configuration management tools such as Ansible & Chef
Experience working with the containerization tool Docker.
Experience working with the container orchestration tool Kubernetes.
Experience in source control management, including SVN and/or Bitbucket & GitHub
Experience with setup & management of monitoring tools like Nagios, Sensu & Prometheus or any other popular tools (see the sketch after this list)
Hands-on experience in Linux, scripting languages & AWS is mandatory
Troubleshoot and triage development and production issues
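As one hedged illustration of combining the scripting and monitoring requirements above, here is a minimal custom Prometheus exporter in Python. The metric name and port are hypothetical, and it assumes the prometheus_client package is installed and Prometheus is configured to scrape the endpoint.

```python
import shutil
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric: free disk space on the root filesystem, in bytes.
ROOT_DISK_FREE = Gauge("root_disk_free_bytes", "Free bytes on the root filesystem")

def collect() -> None:
    usage = shutil.disk_usage("/")
    ROOT_DISK_FREE.set(usage.free)

if __name__ == "__main__":
    start_http_server(9105)  # Prometheus scrapes http://<host>:9105/metrics
    while True:
        collect()
        time.sleep(15)
```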
We are looking for a tech enthusiast to work in a challenging environment: a person who is self-driven, proactive, and has good experience in Azure, DevOps, ASP.NET, etc. Share your resume today if this interests you.
Job Location: Pune
About us:
JetSynthesys is a leading gaming and entertainment company with a wide portfolio of world-class products, platforms, and services. The company has a robust foothold in the global cricket community through its exclusive JV with Sachin Tendulkar for the popular Sachin Saga game and its 100% ownership of Nautilus Mobile, the developer of India’s largest cricket simulation game, Real Cricket, the #1 cricket gaming franchise in the world. Standing atop the charts of organizations fueling the Indian esports industry, JetSynthesys was the earliest entrant in the e-sports industry with a founding 50% stake in India’s largest esports company, Nodwin Gaming, which was recently funded by the popular South Korean gaming firm Krafton.
Recently, the company has developed WWE Racing Showdown, a high-octane vehicular combat game borne out of a strategic partnership with WWE. Adding to the list is the newly launched board game - Ludo Zenith, a completely reimagined ludo experience for gamers, built in partnership with Square Enix - a Japanese gaming giant.
JetSynthesys Pvt. Ltd. is proud to be backed by Mr. Adar Poonawalla - Indian business tycoon and CEO of Serum Institute of India, Mr. Kris Gopalakrishnan – Co-founder of Infosys and the family offices of Jetline Group of Companies. JetSynthesys’ partnerships with large gaming companies in the US, Europe and Japan give it an opportunity to build great products not only for India but also for the world.
Responsibilities
- As a Security & Azure DevOps engineer technical specialist you will be responsible for advising and assisting in the architecture, design and implementation of secure infrastructure solutions
- Should be capable of technical deep dives into infrastructure, databases, and applications, specifically in operating and supporting high-performance, highly available services and infrastructure
· Deep understanding of cloud computing technologies across Windows, with demonstrated hands-on experience in the following domains:
· Experience in building, deploying and monitoring Azure services with strong IaaS and PaaS services (Redis Cache, Service Bus, Event Hub, Cloud Service etc.)
· Understanding of API endpoint management
· Able to monitor and maintain serverless architectures using Function/Web Apps
· Log analysis using Elasticsearch and Kibana dashboards
· Azure Core Platform: Compute, Storage, Networking
· Data Platform: SQL, Cosmos DB, MongoDB, and JQL queries
· Identity and Authentication: SSO Federation, AD/Azure AD, etc.
· Experience with Azure Storage, Backup and Express Route
· Hands-on experience in ARM templates
· Ability to write PowerShell & Python scripts to automate IT Operations (see the sketch after this list)
· Working Knowledge of Azure OMS and Configuration of OMS Dashboards is desired
· VSTS Deployments
· You will help stabilize developed solutions by understanding the relevant application development, infrastructure, and operations implications of the developed solution.
· Use of DevOps tools to deliver and operate end-user services is a plus (e.g., Chef, New Relic, Puppet, etc.)
· Able to deploy and re-create resources
· Building Terraform configurations for cloud infrastructure
· Having ITIL/ITSM standards as best practice
· Pen testing and OWASP security testing will be an added bonus
· Load and performance testing using JMeter
· Capacity review on a daily basis
· Handling repeated issues and solving them using Ansible automation
· Jenkins pipelines for CI/CD
· Code review platform using SonarQube
· Cost analysis on a regular basis to keep systems and resources optimal
· Experience in ASP.NET, C#, and .NET programming is a must.
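One hedged example of the "Python scripts to automate IT Operations" bullet: a small script that shells out to the Azure CLI to report VM power states. The resource group name is a placeholder, and it assumes the az CLI is installed and already logged in.

```python
import json
import subprocess

def vm_power_states(resource_group: str) -> dict[str, str]:
    """Map VM name -> power state for one resource group, via `az vm list -d`."""
    out = subprocess.run(
        ["az", "vm", "list", "-d", "-g", resource_group, "-o", "json"],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    return {vm["name"]: vm.get("powerState", "unknown") for vm in json.loads(out)}

if __name__ == "__main__":
    # "rg-prod" is a placeholder resource group name.
    for name, state in vm_power_states("rg-prod").items():
        print(f"{name}: {state}")
```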
Qualifications:
Minimum graduate (Preferred Stream: IT/Technical)
Experience and Education
• Bachelor’s degree in engineering or equivalent.
Work experience
• 4+ years of infrastructure and operations management experience at a global scale.
• 4+ years of experience in operations management, including monitoring, configuration management, automation, backup, and recovery.
• Broad experience in the data center, networking, storage, server, Linux, and cloud technologies.
• Broad knowledge of release engineering: build, integration, deployment, and provisioning, including familiarity with different upgrade models.
• Demonstrable experience with executing, or being involved in, a complete end-to-end project lifecycle.
Skills
• Excellent communication and teamwork skills – both oral and written.
• Skilled at collaborating effectively with both Operations and Engineering teams.
• Process and documentation oriented.
• Attention to detail. Excellent problem-solving skills.
• Ability to simplify complex situations and lead calmly through periods of crisis.
• Experience implementing and optimizing operational processes.
• Ability to lead small teams: provide technical direction, prioritize tasks to achieve goals, identify dependencies, report on progress.
Technical Skills
• Strong fluency in Linux environments is a must.
• Good SQL skills.
• Demonstrable scripting/programming skills (bash, python, ruby, or go) and the ability to develop custom tool integrations between multiple systems using their published APIs / CLIs (see the sketch after this list).
• Layer 3 networking, load balancer, routing, and VPN configuration.
• Kubernetes configuration and management.
• Expertise using version control systems such as Git.
• Configuration and maintenance of database technologies such as Cassandra, MariaDB, Elastic.
• Designing and configuration of open-source monitoring systems such as Nagios, Grafana, or Prometheus.
• Designing and configuration of log pipeline technologies such as ELK (Elasticsearch, Logstash, Kibana), Fluentd, Grok, rsyslog, Google Stackdriver.
• Using and writing modules for Infrastructure as Code tools such as Ansible, Terraform, Helm, Kustomize.
• Strong understanding of virtualization and containerization technologies such as VMware, Docker, and Kubernetes.
• Specific experience with Google Cloud Platform or Amazon EC2 deployments and virtual machines.
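To illustrate the "custom tool integrations between multiple systems using their published APIs / CLIs" skill, a minimal sketch might glue kubectl output to a chat webhook. The webhook URL is a placeholder, and it assumes kubectl is configured against a cluster and the requests package is installed.

```python
import json
import subprocess

import requests

# Placeholder webhook URL (e.g., a Slack or Teams incoming webhook).
WEBHOOK_URL = "https://example.com/hooks/placeholder"

def failing_pods(namespace: str = "default") -> list[str]:
    """Return names of pods in the namespace that are not Running or Succeeded."""
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    pods = json.loads(out)["items"]
    return [
        p["metadata"]["name"]
        for p in pods
        if p["status"].get("phase") not in ("Running", "Succeeded")
    ]

if __name__ == "__main__":
    bad = failing_pods()
    if bad:
        requests.post(WEBHOOK_URL, json={"text": "Unhealthy pods: " + ", ".join(bad)}, timeout=10)
```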
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer, which address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc.
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack)
· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search, caching, etc.
· Professional certification in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multi-national clients' time zones.
Will be involved in solution design from the conceptual stage through the development cycle and deployment.
Be involved in development operations and support internal teams.
Improve infrastructure uptime, performance, resilience, and reliability through automation.
Willing to learn new technologies and work on research-oriented projects.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
www.banyandata.com
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns (see the sketch after this list)
- Have experience with designing systems, monitoring metrics, and reading graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
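A minimal sketch of a common database access pattern in Python, using SQLAlchemy's ORM with an in-memory SQLite database; the User model and the query are purely illustrative.

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

# Unit-of-work pattern: open a session, make changes, commit once.
with Session(engine) as session:
    session.add(User(name="alice"))
    session.commit()

with Session(engine) as session:
    users = session.execute(select(User).where(User.name == "alice")).scalars().all()
    print([u.name for u in users])
```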
Job Description
- Experienced in cloud (AWS, DigitalOcean, Google & Azure) development and system operations.
- Cloud storage services and prior experience in designing and building infrastructure components on Amazon, DigitalOcean, or other cloud providers.
- Strong Linux experience (Red Hat 6.x/CentOS 5.x & 6.x, Debian).
- Expert knowledge of Git and Jenkins
- Should have experience in CI/CD automation using tools like Jenkins, Ansible, Puppet/MCollective, Chef, Docker, Kubernetes, Git, Terraform, Packer, HashiCorp Vault, Python scripting, etc.
- Good working knowledge of scripting languages such as Python and PHP
- Good communication and presentation skills
Preferred Education:
Bachelor's Degree or global equivalent in Computer Science or related field