• At least 3 years hands-on experience with the Red Hat Linux family

Virtualization and Cloud Computing:
• Senior-level virtualization skills; can perform most virtualization tasks with minimal assistance.
• At least 12 months experience building container-based solutions using one or more of OpenShift, Kubernetes, Docker, Helm.
• At least 6 months hands-on experience with Linux KVM (libvirt, libguestfs, virsh, qemu, qemu-img, virtio) or equivalent experience using Xen or Oracle VM.
• Understanding of cloud service models (IaaS/PaaS/SaaS) considered a plus.

Ansible - Technical Experience
• Install Ansible / Red Hat Ansible Engine on control nodes.
• Create and update inventories of managed hosts and manage connections to them.
• Automate administration tasks with Ansible playbooks and ad hoc commands.
• Write effective Ansible playbooks at scale.
• Protect sensitive data used by Ansible with Ansible Vault.
• Reuse code and simplify playbook development with Ansible roles.
• Configure Ansible managed nodes.
• Create and distribute SSH keys to managed nodes.
• Configure privilege escalation on managed nodes.
• Create Ansible plays and playbooks.
• Know how to work with commonly used Ansible modules.
• Use variables to retrieve the results of running commands.

Deep understanding of core Ansible components:
• Inventories
• Modules
• Variables
• Facts
• Plays
• Playbooks
• Configuration files

• Automation Development - any kind of automation development in the SDLC that improves effectiveness, preferably using Ruby, Python, Bash, etc., which can return JSON for Ansible.
• Working knowledge of software repositories such as GitHub or Bitbucket.
• Cloud - knowledge of OpenStack, VMware, AWS, Azure, SoftLayer, Google Cloud, etc., will be further helpful.
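The "return JSON for Ansible" point above refers to dynamic inventory scripts: Ansible will run any executable passed to `-i` and read the inventory it prints as JSON. A minimal sketch follows; the host names, group names, and variables are illustrative placeholders, not anything from a real environment.

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory script (illustrative hosts only).

Ansible invokes it as: ansible-playbook -i inventory.py site.yml
With --list it must print the full inventory as JSON on stdout.
"""
import json
import sys


def build_inventory():
    # Groups map to lists of hosts plus optional group variables.
    return {
        "webservers": {
            "hosts": ["web1.example.com", "web2.example.com"],
            "vars": {"ansible_user": "deploy"},
        },
        "dbservers": {
            "hosts": ["db1.example.com"],
        },
        # Supplying _meta.hostvars up front means Ansible never needs
        # to call the script once per host with --host.
        "_meta": {
            "hostvars": {
                "web1.example.com": {"http_port": 8080},
                "web2.example.com": {"http_port": 8081},
                "db1.example.com": {},
            }
        },
    }


if __name__ == "__main__":
    if "--host" in sys.argv:
        # Per-host variables are already in _meta, so return empty vars.
        print(json.dumps({}))
    else:
        # Default / --list: dump the whole inventory as JSON.
        print(json.dumps(build_inventory(), indent=2))
```

The same shape works for inventories generated from a CMDB, a cloud API, or a database, which is the usual reason to script this in Python rather than maintain a static hosts file.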
• Knowledge and experience with various cloud service and deployment models (i.e., IaaS, PaaS, XaaS, on-premise, off-premise, etc.)
• Sysadmin background (with both implementation and support experience across infrastructure, servers, OSes, middleware and databases)
• Understanding of several middleware and database technologies such as Oracle, SAP, DB2, MySQL, Apache, IIS, etc.
• Solid background in operating system deployment and administration (various Linux/Unix flavours - RHEL/SLES, AIX - as well as Windows Server)

Nice to Have:
• Experience with Chef.
• Experience with VMware technologies.
• Experience in software development projects using agile development methodologies (Scrum).
• Experience with the Azure, AWS and Google public clouds.

Soft Skills (100% coverage required):
• Strong spoken and written communication skills
• Demonstrated ability to work in large, geographically spread teams
• Ability to technically lead small teams
• Client-facing experience and skills
Key Qualifications:
- At least 2 years of hands-on experience with cloud infrastructure on AWS or GCP
- Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
- Knowledge of DevOps tools (e.g. Jenkins, Groovy and Gradle)
- Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
- Proven ability to work independently or as an integral member of a team

Preferred Skills:
- Familiarity with standard IT security practices such as encryption, credential and key management
- Proven ability to pick up new coding languages (Java, Python, ...) to support DevOps operations and cloud transformation
- Familiarity with web standards (e.g. REST APIs, web security mechanisms)
- Multi-cloud management experience with GCP / Azure
- Experience in performance tuning, service outage management and troubleshooting
4+ years of experience with development practices and awareness of leading cloud technologies/trends, in order to formulate a new DevOps product catalog, devise deployment workflows and strategies, and integrate dev tools for static and dynamic code analysis. Experience in any cloud, but 1+ years in Azure is mandatory. 2+ years of experience writing scripts for Azure or AWS deployments. 1+ years of experience using Kubernetes. Infrastructure provisioning expertise in a few tools such as Docker, Chef, Puppet, Ansible, Packer, CloudFormation, Terraform. Experience with application servers, web servers, and databases (Nginx, PostgreSQL, MongoDB, HAProxy, Tomcat, Flash Media Server/Red5, Redis, Elasticsearch, etc.).
- In-depth understanding of Infrastructure as Code using Terraform
- Expert at configuring AWS DevOps
- Good understanding of NFRs - performance, security, high availability
- Good understanding of API-led architecture
- Expert at configuring infrastructure deployment with AWS DevOps
- In-depth experience with AWS resources such as SQS, DynamoDB, CloudWatch, and Hive/HiveQL
- Experience with monitoring tools such as CloudWatch, Grafana, El
Requirements and Qualifications
- Bachelor's degree in Computer Science, Engineering or a related field
- 2+ years of experience
- Excellent analytical and problem-solving skills
- Strong knowledge of Linux systems and internals
- Programming experience in Python/shell scripting
- Strong AWS skills with knowledge of EC2, VPC, S3, RDS, CloudFront, Route53, etc.
- Experience in containerization (Docker) and container orchestration (Kubernetes)
- Experience with DevOps & CI/CD tools such as Git, Jenkins, Terraform, Helm
- Experience with SQL & NoSQL databases such as MySQL, MongoDB, and Elasticsearch
- Debugging and troubleshooting skills using tools such as strace, tcpdump, etc.
- Good understanding of networking protocols and security concerns (VPN, VPC, IG, NAT, AZ, subnets)
- Experience with monitoring and data analysis tools such as Prometheus, EFK, etc.
- Good communication & collaboration skills and attention to detail
- Participation in rotating on-call duties
We're building video calling as a service. We are a managed video calling solution built on top of WebRTC, which allows its customers to integrate live video calls within existing solutions in less than 10 lines of code. We provide a completely managed SDK that solves the problem of battling endless edge cases of video calling APIs.

Location - Bangalore (Remote)
Experience - 2+ Years

Requirements:
● Should have at least 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience building scalable server-side systems
● Should have experience with cloud infrastructure and designing databases
● Experience with NodeJS/TypeScript/AWS is a bonus
● Experience with WebRTC is a bonus
Job Description:
Must have: DevOps, Jenkins, Terraform, shell scripts.
Azure cloud computing.
Knowledge of code versioning tools such as Git, Bitbucket, etc.
Good verbal and written communication.
Should be a team player.
Good to have:
• Proficiency in Azure cloud platforms
• Knowledge of build and deployment tools such as Jenkins, Ansible, etc.
4 to 6 years of development experience against the job description below.
Band: B2
1) Must have (top 3 skills): DevOps, Jenkins and Terraform
2) Good to have: Azure cloud computing, Git and shell scripts
DevOps Engineer

Who we are?
Searce is a niche cloud consulting business with a futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully accelerated on cloud.

We are looking for a DevOps Engineer with experience in developing, automating and debugging CI/CD pipelines with a variety of automation tools and technologies on public cloud.

Responsibilities
This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infra from on-premise to cloud, or to help optimize cloud spend from one public cloud to another.
● Be the first to experiment with new-age cloud offerings; help define best practice as a thought leader for cloud, automation & DevOps; be a solution visionary and technology expert across multiple channels.
● Continually augment skills and learn new tech as the technology and client needs evolve.
● Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
● Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods.
● Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
● Participate in technical reviews of requirements, designs, code and other artifacts.
● Identify and keep abreast of new technical concepts in Google Cloud Platform.
● Security, Risk and Compliance - advise customers on best practices around access management, network setup, regulatory compliance and related areas.

Preferred Qualification
Is education overrated? Yes. We believe so.
However, there is no other way to locate you. So unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since you were 12. The latter is better, and we will find you faster if you mention it. It's not just degrees; we are not too thrilled by tech certifications either ... :) To reiterate: passion for all things tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude and a strong 'desire to deliver' outlive those fancy degrees!
● 5 - 10 years of experience, with at least 2 - 3 years of hands-on experience in cloud computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment
● Good analytical, communication, problem-solving and learning skills
● Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies

How Do We Work?
It's all about being Happier first. And the rest follows. Searce work culture is defined by HAPPIER:
1. Humble: Happy people don't carry ego around. We listen to understand, not to respond.
2. Adaptable: We are comfortable with uncertainty. And we accept change well. As that's what life's about.
3. Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges. We don't have time or adequate space for it.
4. Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla's new model. Passion is what drives us to work and makes us deliver the quality we deliver.
5. Innovative: Innovate or die. We love to challenge the status quo.
6. Experimental: We encourage curiosity and making mistakes.
7. Responsible: Driven. Self-motivated. Self-governing teams. We own it.
We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment.
We are a flat organization with unlimited growth opportunities, and small team sizes – wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
What you'll do?
- Develop and maintain IaC using Terraform and Ansible
- Draft design documents that translate requirements into code
- Deal with challenges associated with scale
- Assume responsibilities from technical design through technical client support
- Manage expectations with internal stakeholders and context-switch in a fast-paced environment
- Thrive in an environment that uses Elasticsearch extensively
- Keep abreast of technology and contribute to the engineering strategy
- Champion best development practices and provide mentorship

What we're looking for?
An AWS Certified Engineer with strong skills in:
- Terraform
- Ansible
- *nix and shell scripting
- Elasticsearch
- CircleCI
- CloudFormation
- Python
- Packer
- Docker
- Prometheus and Grafana
- Challenges of scale
- Production support
Plus: sharp analytical and problem-solving skills, a strong sense of ownership, a demonstrable desire to learn and grow, excellent written and oral communication skills, and mature collaboration and mentoring abilities.
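One practical bridge between the Python and Terraform skills listed above: Terraform accepts a JSON variant of its configuration language (files named *.tf.json), so infrastructure definitions can be generated from a script instead of hand-written in HCL. A minimal sketch, where the resource name, bucket naming scheme, and tags are all made-up examples:

```python
"""Emit a Terraform-readable JSON configuration file (*.tf.json).

Terraform treats JSON files as equivalent to HCL, so generating them
from Python is a simple way to template infrastructure definitions.
The resource label and attributes below are illustrative only.
"""
import json


def s3_bucket_config(label: str, env: str) -> dict:
    # Mirrors:  resource "aws_s3_bucket" "<label>" { ... }  in HCL
    return {
        "resource": {
            "aws_s3_bucket": {
                label: {
                    "bucket": f"{label}-{env}",
                    "tags": {"Environment": env, "ManagedBy": "python"},
                }
            }
        }
    }


config = s3_bucket_config("artifacts", "staging")
with open("main.tf.json", "w") as fh:
    json.dump(config, fh, indent=2)
```

After this runs, `terraform plan` in the same directory would pick up main.tf.json alongside any .tf files; for anything beyond a sketch, a purpose-built tool such as CDK for Terraform is the more common choice.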
Infrastructure/DevOps Engineer
₹10L – ₹25L (not a deciding factor; depends on your current CTC)

About Pratilipi: Founded in 2015 by a set of reading enthusiasts, Pratilipi is a storytelling platform that brings readers, writers and their stories together. What started as a reading platform now spans three more products: Pratilipi Comics, Pratilipi FM, and IVM Podcasts. All the magic happens from a super cool office in Bangalore where we brainstorm, ideate, debate, and execute. Aggressive and fast-paced, we motivate ourselves with meaningful disputes over infinite cups of tea.

Product Details
- Self-publishing platform: 12 languages, 250K+ writers, 2.6M+ stories, 25M+ MAU, 1.7B minutes of total reading time. We record 2M+ new followers, 5.5M+ new ratings, and 2M+ new reviews on the platform every month. We are larger than the other Indian online literature platforms combined.
- Pratilipi FM: our stories in audio format across 8 languages; 5,000+ hours of content, 100K MAU.
- Pratilipi Comics: classic and in-house produced comics in 2 languages; 150+ comic series, 300K MAU.
- IVM Podcasts: the largest podcasting studio in India, with 3M+ listens a month.
Total funding: $30M

About the Team: Our Engineering team works closely with the Product, Business and Data Science teams to build experiments and features that enable delightful experiences for our users. Hence we fight a lot and party even more. We are a lean team that learns from each other and iterates rapidly to make a larger impact. At any point in time, we are making (and breaking) high user-impact features in production. Our microservices architecture allows us to be polyglot in our technology stack. We are hosted entirely on AWS and available to users via web, Android and iOS. Our tech team consists of 25+ engineers who work in a high-ownership environment, from fixing bugs to developing features to organizing stand-up comedy shows in the office.
With an organisation strength of 95+, we are looking for accountable engineers who fit into our culture and bring something new to the table, including party tricks.

Role: You will be handling the whole infrastructure for all the products at Pratilipi, helping us scale the systems to reach 500M+ users in a highly optimised, secure, reliable and cost-effective way.

Responsibilities:
- Making the deployment process super easy for engineers
- Managing and optimising the configuration and provisioning setup
- Taking Pratilipi to a complete Infrastructure as Code setup
- Managing the logging, monitoring and alerting setup
- Defining policy management for the whole organisation
- Automating manual workflows
- Making the whole platform highly available
- Building fault tolerance into the platform
- Making Pratilipi ready for disaster recovery
- Building realtime systems
- Planning for the growth of Pratilipi's infrastructure
- A great user experience for engineers using the infrastructure
- Infrastructure as a platform for all products at Pratilipi

Experience: 2+ years

Must have:
- Good communication skills
- AWS expertise: VPN, ECS, SG, VPC, ELB, Auto Scaling, EC2, IAM
- HTTP/HTTPS protocol
- Unix operating systems
- Docker
- Jenkins
- Redis cluster management
- RDS management
- Ansible

Hands-on experience:
- Networking, TCP layer
- Python or shell scripting
- Terraform
- Prometheus
- ELK
- CDN
- Web servers, proxy servers, load balancers
- Distributed systems & cluster management
- Consul
- Batch infrastructure
- SQL/NoSQL setup & cluster management
- WAF

Good to have:
- Working knowledge of Nginx
- Security layer setup
- Cloud design patterns
- Machine learning infrastructure
- GCP exposure
- Kubernetes exposure
- Data streaming exposure

Benefits:
- Medical insurance
- Online courses of your choice that will help you grow
- Book purchases
- Mental health consultation
- Participation in the company's Employee Stock Options Program

Do not apply if you are not:
- Ownership driven
- Curious in general
- Able to explain things in simple terms
- Self-motivated
- Used to working in a chaotic environment
- Highly proactive
- Hungry to grow
Responsibilities:
- Orchestrate the provisioning, load balancing, dynamic configuration and monitoring of servers across cloud providers, data centres and availability zones using Docker, K8s, Terraform, Ansible, etc.
- Develop an audit logging framework that processes logs into dashboards using ELK, Prometheus, Grafana, etc.
- Work on hardening of infrastructure on AWS and set up different firewall applications to improve overall security.
- Take care of the security and compliance requirements mandated by fintech industry best practices.
- Automate the CI/CD pipeline, covering all phases.

Requirements:
- Experience as a DevOps engineer, with hands-on experience automating all stages of the software development lifecycle
- Working knowledge of microservices-based architecture and Docker, Terraform, Kubernetes
- Good understanding of configuration management (Ansible, Chef, Puppet, etc.)
- Experience using technologies such as Prometheus, Grafana, the ELK stack, etc.
- Proficiency in developing CI/CD pipelines using tools such as Jenkins, GoCD, GitLab, etc.
- Experience with Nginx, Apache HTTP Server, Apache Tomcat
- Previous experience setting up automated tests for compliance and regulatory frameworks (PSD2, PCI DSS)
- Domain expertise in fintech

The location is Bangalore.
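The audit-logging responsibility above usually begins with parsing structured log lines into counters that a dashboard (Grafana, Kibana) can chart. A minimal, self-contained sketch of that aggregation step; the log schema (JSON lines with `path` and `status` fields) is an assumed example format, not a standard:

```python
"""Aggregate JSON access-log lines into per-status-code counts,
the kind of metric a Grafana/Kibana dashboard would chart.
The log line schema used here is an assumed example format."""
import json
from collections import Counter


def count_statuses(log_lines):
    """Return a Counter mapping HTTP status -> occurrences, skipping bad lines."""
    counts = Counter()
    for line in log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # a malformed line shouldn't kill the whole pipeline
        counts[event.get("status", 0)] += 1
    return counts


sample = [
    '{"path": "/login", "status": 200}',
    '{"path": "/login", "status": 401}',
    '{"path": "/api/v1/users", "status": 200}',
    'not json at all',
]
print(count_statuses(sample))  # Counter({200: 2, 401: 1})
```

In a real ELK/Prometheus setup this aggregation is done by Logstash filters or a Prometheus exporter rather than ad hoc scripts, but the parse-validate-count shape stays the same.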
Overview: Hands-on technologists who will help us architect and design large-scale, multi-tiered, microservice-based distributed software systems and services using object-oriented design, distributed programming, PHP, NodeJS, Python, etc.

Responsibilities:
- Collaborate with cross-functional teams, gathering business and functional requirements, to solve complex problems by building high-volume transactional systems that handle massive data and high traffic.
- Be part of the full development life cycle, end to end, from scoping, effort estimation and risk identification through implementation and testing, while meeting project schedules and timelines.
- Work on (and recommend) the best technologies and components on the server side.
- Proactively identify architectural weaknesses, design issues, and performance and scalability issues, and recommend solutions.
- Take complete ownership of problem-free execution of owned modules and solutions, with a focus on code optimisation, code quality, maintainability, etc.
- Balance short-term versus long-term actions and strategic versus tactical requirements, while continuing to move toward the strategic vision.

Requirements:
- BE/B.Tech in computer science from a top college, or equivalent experience, with 3-6 years of hands-on experience designing and developing massively large-scale, microservice-based, high-traffic Node.js applications (including Socket.io and Express or similar)
- Excellent data structure, algorithm and problem-solving skills; full-stack knowledge and (preferably) application architecture experience and/or understanding
- Proficiency in multiple programming languages, both dynamic and strongly typed object-oriented languages; knowledge of one or more of: PHP, Python, Couchbase, DynamoDB, NoSQL, Terraform, Cassandra, Redis, AWS, etc.
- End-user focused, reacts well to change, works well with teams, and able to multi-task while enjoying challenging assignments in a high-energy, fast-growing startup workplace.
- Familiarity with Agile development, Scrum, continuous integration, and test-driven development processes
- Prior startup experience or a large-scale B2C product company background is preferred
Hello Network - Hiring Alert! Searce Inc is looking for a Director - Cloud Engineering.

### You are a great fit if
- You have worked on environments of all shapes and sizes: on-prem, private cloud, public cloud, hybrid, all Windows / all Linux / a healthy mix. Thanks to this experience, you can connect the dots quickly and understand customer pain points.
- **You are curious**. You keep up with the breakneck pace of innovation in public cloud. You try and learn new things.
- **You are hands-on**. Not content with just making architecture diagrams, you take pleasure in bringing them to life.

### Skillset
- Min 3+ years of experience on GCP / AWS / Azure (overall 8+ years of experience)
- Python, Terraform / Pulumi, CF, Ansible
- Kubernetes, GKE, EKS
- Jenkins, Spinnaker, Prometheus, Grafana, etc.
- Strong cloud architecture experience
- Cloud security
- Relational & NoSQL databases
- Cloud economics: the ability to compare services across public cloud platforms and understand their pros and cons
- Ability to deliver technology deep-dive sessions and workshops

### Nice to have
- Certifications: GCP and/or AWS, professional level
- CNCF
- Your contributions to the community: tech blog, Stack Overflow, etc.
The SA must have experience in the following areas:
- GCP Infrastructure as Code (knowledge of Terraform, etc.) and how to deploy it
- GCP service architecture and tooling knowledge
- Some experience with GCP BigQuery, to allow for strategies when online services need to query for data (scaled cost vs fixed)
- Experience with the online Google tools ecosystem (Tag Manager, Optimize, GMP)
- Some knowledge of network infrastructure on GCP
- Working with security gateways
- Working with elastic infrastructure and container technologies is desirable
Azure DevOps - working experience with Azure YAML pipelines. (Note: some candidates say they have worked with YAML, but for Jenkins rather than Azure DevOps.)
Azure - infrastructure automation using Terraform/ARM templates. (Note: some say they have worked with Terraform, but for AWS rather than Azure. Please confirm Terraform experience is for Azure infrastructure automation.)
PowerShell scripting to automate and deploy .NET applications.
Requirements and Qualifications
- 2+ years of experience
- Excellent analytical and problem-solving skills
- Strong knowledge of Linux systems and internals
- Programming experience in Python/shell scripting
- Strong AWS skills with knowledge of EC2, VPC, S3, RDS, CloudFront, Route53, etc.
- Experience in containerization (Docker) and container orchestration (Kubernetes)
- Experience with DevOps & CI/CD tools such as Git, Jenkins, Terraform, Helm
- Experience with SQL & NoSQL databases such as MySQL, MongoDB, and Elasticsearch
- Debugging and troubleshooting skills using tools such as strace, tcpdump, etc.
- Good understanding of networking protocols and security concerns (VPN, VPC, IG, NAT, AZ, subnets)
- Experience with monitoring and data analysis tools such as Prometheus, EFK, etc.
- Participation in rotating on-call duties
Skills: DevOps, Bash, Python, Docker, Kubernetes, Ansible, Terraform (you will build from scratch at a startup; there is no team yet, and you would be the first member to take up the challenge)

Job Description:
• 4+ years in DevOps
• Exposure to at least one major cloud platform (AWS, GCP or Azure) and its commonly used services
• Hands-on experience with deployment and orchestration technologies such as Docker, Kubernetes, Ansible, and Terraform
• A scripting language such as Bash or Python
The DevOps Engineer will be responsible for automating the release, deployment, and management of customer-facing products from development through production. This position's primary focus will be working collaboratively with developers and QA engineers to architect and build automated provisioning, configuration management, and service orchestration systems within internal corporate and public cloud environments. As a key member of the Operations team, the DevOps Engineer will have the opportunity to work in depth with cutting-edge technologies. The successful candidate will enjoy working for an innovative company with smart, passionate people who work hard, have fun, and are invested in the success of Codehall Technology and its future!

Responsibilities
- Automation and management of AWS cloud-based production systems
- Ensuring availability, performance, and scalability of AWS production systems
- Evaluation of new technology alternatives and vendor products
- System troubleshooting and problem resolution across various application domains
- Provision of critical system security by leveraging best practices for cloud security
- Providing recommendations for architecture and process improvements
- Definition and deployment of systems for metrics, logging, and monitoring
- Utilizing and managing the right automation tools for different operational processes
- Bringing a positive, can-do attitude and being a team player

Qualifications
- 5+ years of experience in the design, implementation, maintenance, and management of cloud infrastructure (Linux/Docker/Kubernetes/microservices)
- Hands-on experience using a broad range of AWS services including IAM, EC2, and EKS
- Hands-on experience developing Infrastructure as Code using tools like Terraform
- Hands-on experience using and automating tools for logging/monitoring
- In-depth experience with CI/CD pipelines
- Understanding of cloud security best practices
- Well versed in container technologies and networking
- Strong programming background, Python preferred
- Strong analytical and debugging skills
- BS/MS degree in Computer Science or a related field