Looking for a DevOps lead who is an infrastructure framework enthusiast.
EXP: 4 - 7 yrs
- Any scripting language: Python, Scala, shell, or Bash
- Cloud: AWS
- Database: relational (SQL) & non-relational (NoSQL)
- CI/CD tools and version control
Roles & Responsibilities:
● Deploy updates and fixes.
● Investigate and resolve technical issues.
● Automate tasks to reduce manual interaction with servers.
● Design procedures for system troubleshooting and maintenance.
● Utilize various open-source technologies.
● Use various tools to orchestrate solutions.
● Configure and manage data sources like MySQL, Elasticsearch, and Redis.
● Be a team player, ready to work in shifts if needed.
● Write scripts for automation using shell and Python.
● Be able to support whenever a team requires DevOps help (sometimes during weekends).
Requirements:
● Platform: experience with Ubuntu/CentOS/RHEL.
● 2 - 4 years of strong experience with Linux-based infrastructures and Linux/Unix administration.
● 2 - 4 years of experience with scripting languages (Bash/Python), mandatory.
● Work experience in configuration management (Ansible, Chef), good to have.
● Stellar troubleshooting skills, with the ability to spot issues before they become problems.
● Time and project management skills, with the capability to prioritize and multitask as needed.
● Solid team player.
● Implement automation tools and frameworks (CI/CD pipelines); Jenkins good to have.
● Actively monitor the infrastructure.
● Experience handling microservices and knowledge of Docker is preferable.
● Actively handle release management throughout the day (should be available for releases after office hours).
● Should have experience in Docker, Kubernetes, and their related CI/CD.
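Tasks like the automation bullets above (reducing manual interaction with servers via shell or Python scripts) often come down to small utility scripts. A minimal Python sketch of one such task, a disk-usage check; the 90% threshold and the list of monitored paths are assumptions, not part of the posting:

```python
import shutil

def paths_over_threshold(paths, threshold=0.9):
    """Return the paths whose filesystem usage exceeds the threshold (0-1)."""
    flagged = []
    for path in paths:
        usage = shutil.disk_usage(path)
        if usage.used / usage.total > threshold:
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    # In a real setup this would feed an alerting channel instead of stdout.
    for p in paths_over_threshold(["/"], threshold=0.9):
        print(f"WARNING: {p} is above the usage threshold")
```

A cron entry or a systemd timer would typically run such a check periodically.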
4+ years of experience in understanding development practices and awareness of leading cloud technologies/trends to formulate a new DevOps product catalog, devise deployment workflows and strategies, and integrate dev tools for static and dynamic code analysis. Experience in any cloud, but 1+ years in Azure is mandatory. 2+ years of experience writing scripts for Azure or AWS deployments. 1+ years of experience using Kubernetes. Infrastructure provisioning expertise in a few tools such as Docker, Chef, Puppet, Ansible, Packer, CloudFormation, and Terraform. Experience with application servers, web servers, and databases (Nginx, PostgreSQL, MongoDB, HAProxy, Tomcat, Flash Media Server/Red5, Redis, Elasticsearch, etc.).
JOB DESCRIPTION
About Us: Led by former Salesforce.com and Siebel executive Chuck Ganapathi, Tact.ai is on a mission to make enterprise software more human-friendly. Tact.ai’s conversational AI sales platform is used by sales teams at GE, Cisco Systems, Kelly Services, and other Fortune 500 companies to drive revenue growth by eliminating friction in their daily sales workflow. Headquartered in Sunnyvale, CA, Tact.ai Technologies, Inc. is a privately held company backed by Accel Partners, Redpoint Ventures, Upfront Ventures, M12 (formerly Microsoft Ventures), Comcast Ventures, Salesforce Ventures, and the Amazon Alexa Fund.
Responsibilities: We’re looking for a passionate, security-minded DevOps/IT Engineer with AWS, Azure, or GCP experience to make an immediate impact on our operations team. We are building a global infrastructure and need your help to do it. Some of your main responsibilities include:
Help us continue our journey of building a fully automated infrastructure
Help us design and build a comprehensive, globally distributed system
Maintain and improve our cloud infrastructure, orchestration, and deployment process
Assess existing infrastructure, identify problems, and propose and develop solutions
Requirements:
7+ years of DevOps/operations with a focus on security, automation, and cloud
Extensive experience in cloud infrastructure (architecture-to-deployment)
Collaborate with development to solve complex engineering challenges
Passion for clean and elegant code
Experience shipping code (Ruby/Java/JS)
Extensive experience developing Infrastructure-as-Code (Terraform), orchestrating containers (Kubernetes), and deployment strategies (Spinnaker)
Experience with configuration management tools (Ansible, Chef, or Puppet)
Excellent overall understanding of systems and networks and the ability to troubleshoot
Strong problem-solving skills and willingness to work with various teams to find solutions
Be part of the on-call rotation and provide operational coverage
Perks: Competitive salary | Stock options | Generous leave policy | Insurance coverage | Open & flexible work culture | High-growth professional environment
Preference: Can join immediately or within 1 month
Important Note: Keep the resume to only 2-3 pages
Explore more: Website: https://tact.ai | LinkedIn: https://www.linkedin.com/company/tact.ai/
Skills: Cloud and virtualization-based technologies (Amazon Web Services (AWS), VMware). Java application server administration (WebLogic, WildFly, JBoss, Tomcat). Docker and Kubernetes (EKS). Linux/UNIX administration (Amazon Linux and RedHat). Developing and supporting cloud infrastructure designs and implementations and guiding application development teams. Configuration management tools (Chef, Puppet, or Ansible). Log aggregation tools such as Elastic and/or Splunk. Automate infrastructure and application deployment-related tasks using Terraform. Automate repetitive tasks required to maintain a secure and up-to-date operational environment.
Responsibilities: Build and support always-available private/public cloud-based software-as-a-service (SaaS) applications. Build AWS or other public cloud infrastructure using Terraform. Deploy and manage Kubernetes (EKS) based Docker applications in AWS. Create custom OS images using Packer. Create and revise infrastructure and architectural designs and implementation plans, and guide the implementation with operations. Liaise between application development, infrastructure support, and tools (IT Services) teams. Develop and document Chef recipes and/or Ansible scripts. Support throughout the entire deployment lifecycle (development, quality assurance, and production). Help developers leverage infrastructure, application, and cloud platform features and functionality; participate in code and design reviews; and support developers by building CI/CD pipelines using Bamboo, Jenkins, or Spinnaker. Create knowledge-sharing presentations and documentation to help developers and operations teams understand and leverage the system's capabilities. Learn on the job and explore new technologies with little supervision. Leverage scripting (Bash, Perl, Ruby, Python) to build required automation and tools on an ad-hoc basis.
Who we have in mind: Solid experience in building a solution on AWS or other public cloud services using Terraform. Excellent problem-solving skills with a desire to take on responsibility. Extensive knowledge of containerized applications and deployment in Kubernetes. Extensive knowledge of the Linux operating system, RHEL preferred. Proficiency with shell scripting. Experience with Java application servers. Experience with Git and Subversion. Excellent written and verbal communication skills, with the ability to communicate technical issues to non-technical and technical audiences. Experience working in a large-scale operational environment. Internet and operating system security fundamentals. Extensive knowledge of massively scalable systems. Linux operating system/application development desirable. Programming in scripting languages such as Python; other object-oriented languages (C++, Java) are a plus. Experience with configuration management automation tools (Chef or Puppet). Experience with virtualization, preferably on multiple hypervisors. BS/MS in Computer Science or equivalent experience.
Education or Equivalent Experience: Bachelor's degree or equivalent education in related fields. Certificates of training in associated fields/equipment.
Responsibilities: Orchestrate the provisioning, load balancing, dynamic configuration, and monitoring of servers across cloud providers, data centres, and availability zones using Docker, K8s, Terraform, Ansible, etc. Develop an audit logging framework that processes logs into dashboards using ELK, Prometheus, Grafana, etc. Work on hardening of infrastructure on AWS and set up different firewall applications to improve overall security. Take care of different security and compliance requirements as mandated by fintech industry best practices. Automate the CI/CD pipeline, covering all phases via automation.
Requirements: Experience as a DevOps Engineer with hands-on experience automating all stages of the software development lifecycle. Working knowledge of microservices-based architecture and Docker, Terraform, Kubernetes. Good understanding of configuration management (Ansible, Chef, Puppet, etc.). Experience using technologies such as Prometheus, Grafana, the ELK stack, etc. Proficient in developing CI/CD pipelines using tools such as Jenkins, GoCD, GitLab, etc. Experience with Nginx, Apache HTTP Server, Apache Tomcat. Previous experience setting up automated tests for compliance and regulatory frameworks (PSD2, PCI DSS). Domain expertise in fintech. The location is Bangalore.
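The "audit logging framework that processes logs into dashboards" responsibility above is, at its core, log aggregation. A hedged Python sketch of that first aggregation step; the log line format is an assumption, and a real pipeline would ship these counts to ELK or Prometheus rather than print them:

```python
from collections import Counter

def status_counts(log_lines):
    """Aggregate access-log lines into a count per HTTP status code.

    Assumes the status code is the second whitespace-separated field,
    e.g. '/api/login 200 12ms' (a hypothetical format).
    Lines that don't match are skipped.
    """
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 2 and fields[1].isdigit():
            counts[fields[1]] += 1
    return dict(counts)

logs = [
    "/api/login 200 12ms",
    "/api/login 401 8ms",
    "/api/pay 200 30ms",
]
print(status_counts(logs))  # {'200': 2, '401': 1}
```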
Tech Lead - Skills Required
● Evidence of successful development/engineering team leadership experience.
● Experience of communicating and collaborating in globally distributed teams.
● Ability to write robust, maintainable code in Python and/or Perl.
● Extensive knowledge of Linux, including familiarity with C, UNIX system calls, and low-level O/S and network protocols, as well as block, file, and object storage protocols.
● Experience of using a modern configuration management system, such as SaltStack, Puppet, or Chef.
● Effective troubleshooting skills across hardware, O/S, network, and storage.
Skills Desired
● Enthusiasm for modern dev tools & practices including Git, Jenkins, automated testing, and continuous integration.
● Management of external vendor resources.
Skills Required
● Extensive experience of Linux, including familiarity with C, UNIX system calls, and low-level O/S and network protocols, as well as block, file, and object storage protocols.
● Experience of using a modern configuration management system (such as Ansible, SaltStack, Puppet, or Chef) to automate the management of a large-scale Linux deployment.
● Effective troubleshooting skills across hardware, O/S, network, and storage.
Skills Desired
● Ability to write robust, maintainable code in Python and/or Perl.
● Experience working in a large, multi-national enterprise in any industry vertical, showing experience of communicating and collaborating in globally distributed teams.
● Enthusiasm for modern development tools and practices including Git, Jenkins, automated testing, and continuous integration.
● Experience of designing, implementing, and supporting large-scale production IaaS platforms.
● Knowledge of building and managing Docker containers in a secure manner.
DevOps/Automation Engineer
The DevOps Engineer is a member of the Cloud DevOps Team and will work toward the team’s ongoing goals of bringing in DevOps culture, processes, and tools. Minimum 3+ years in the job role below.
Job Role: Establish a microservices plan with regard to data services. Develop automation scripts for database creation, management, and migration. Set up integration environments and provision databases for new projects. Dynamically extract, transform, and load data from one silo to others, employing TTL strategies to reduce the data or enrich some of it. Continuously monitor and optimize performance: strive for low and consistent tail latency and a joint view of the app and the database. Expert zone: automate scale-out and automate schema changes. Backup and recovery: back up to cold storage and restore. Plan for downtime and high availability using multiple data center disaster recovery capabilities.
Technical Skill Set required: Designing and writing Chef cookbooks - MUST HAVE. Chef orchestration and configuration - MUST HAVE. Experience in creating Ansible playbooks for infrastructure build/management automation - Highly Desirable. Oracle database administration using Chef and Ansible - Highly Desirable. Strong in database concepts of leading RDBMS and NoSQL platforms. Experience in setting up Docker and containers and troubleshooting HA orchestration platforms. Strong experience working on Unix-based platforms and writing scripts using shell and Python.
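The backup-and-recovery and TTL points in this role reduce to retention logic: decide which backups have expired, then remove them. A minimal Python sketch under assumed conventions (the `db-YYYYMMDD.dump` naming scheme is hypothetical; it only matters that names sort chronologically):

```python
from pathlib import Path

def select_expired(backup_dir, keep=7):
    """Return backup files beyond the newest `keep`, oldest first.

    Assumes timestamped names that sort chronologically,
    e.g. db-20240101.dump (a hypothetical naming scheme).
    """
    backups = sorted(Path(backup_dir).glob("*.dump"), reverse=True)
    return sorted(backups[keep:])

def rotate(backup_dir, keep=7):
    """Delete everything beyond the newest `keep` backups."""
    for old in select_expired(backup_dir, keep):
        old.unlink()
```

In practice the expired set would be archived to cold storage before deletion, as the posting suggests.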
AWS DevOps Engineer
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities: This is a highly accountable role and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensuring regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• First point of contact for handling customer (possibly internal stakeholder) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Developing tooling and processes to drive and improve customer experience; creating playbooks.
• Eliminating manual tasks via configuration management.
• Intelligently migrating services from one AWS region to other AWS regions.
• Creating, implementing, and maintaining security policies to ensure ISO/GDPR/SOC/PCI compliance.
• Verifying infrastructure automation meets compliance goals and is current with the disaster recovery plan.
• Evangelizing configuration management and automation to other product developers.
• Keeping up to date with upcoming technologies to maintain state-of-the-art infrastructure.
Required Candidate profile:
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.).
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands-on experience with Docker is a big plus.
• Experience working in an Agile, fast-paced DevOps environment.
• Strong knowledge of databases such as MongoDB/MySQL/DynamoDB/Redis/Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, Nagios, etc.
• Fluency with version control systems, with a preference for Git.
• Strong Linux-based infrastructures and Linux administration experience.
• Experience installing and configuring application servers such as WebLogic, JBoss, and Tomcat.
• Hands-on experience with logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and train others on technical and procedural topics.
Roles and Responsibilities: Managing availability, performance, and capacity of infrastructure and applications. Building and implementing observability for application health/performance/capacity. Optimizing on-call rotations and processes. Documenting “tribal” knowledge. Managing infra platforms like Mesos/Kubernetes, CI/CD, observability (Prometheus/New Relic/ELK), cloud platforms (AWS/Azure), databases, and data platforms infrastructure. Helping onboard new services with the production readiness review process. Providing reports on services' SLOs/error budgets/alerts and operational overhead. Working with dev and product teams to define SLOs/error budgets/alerts. Working with the dev team to gain an in-depth understanding of the application architecture and its bottlenecks. Identifying observability gaps in product services and infrastructure and working with stakeholders to fix them. Managing outages, doing detailed RCAs with developers, and identifying ways to avoid such situations. Managing/automating upgrades of the infrastructure services. Automating toil work.
Experience & Skills: 6+ years of total experience. Experience as an SRE/DevOps/Infrastructure Engineer on large-scale microservices and infrastructure. A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver. A deep understanding of computer science, software development, and networking principles. Demonstrated experience with languages such as Python, Java, Golang, etc. Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.). Extensive experience with DNS, TCP/IP, UDP, gRPC, routing, and load balancing. Expertise in GitOps, Infrastructure-as-Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, and Ansible. Expertise in Amazon Web Services (AWS) and/or other relevant cloud infrastructure solutions like Microsoft Azure or Google Cloud.
Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo, etc. Experience in managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
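Defining SLOs and error budgets, as the SRE role above requires, is simple arithmetic: the budget is the downtime an availability target permits over a window. A sketch, assuming a 30-day rolling window (the window length is a convention, not from the posting):

```python
def error_budget_minutes(slo, window_days=30):
    """Total allowed downtime (minutes) for an availability SLO over a window."""
    return (1 - slo) * window_days * 24 * 60

def budget_remaining(slo, downtime_minutes, window_days=30):
    """Fraction of the error budget still unspent (negative if overspent)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 21.6), 2))  # 0.5
```

So a 99.9% SLO allows roughly 43 minutes of downtime per 30 days; alerts are typically wired to the burn rate of this budget rather than to raw error counts.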
Requirements: Experience with various cloud development methodologies and virtual environments. 2-4 years of experience with container orchestration (Docker, Kubernetes). 2-4 years of experience with configuration management tooling (Chef, Puppet, Ansible, Jenkins). Strong foundation in programming, algorithms, software application design, and database knowledge. Experience in building dashboards and aggregating metrics.
Responsibilities: Maintain, optimize, and grow our production, staging, and test infrastructure hosted in VMinfra and OpenStack. Develop automation scripts to manage infra security and performance monitoring of applications using Ansible, Chef, Puppet, Nagios, and Splunk. Server and application patch management for Unix/Linux and Windows-based production servers. Strong working experience with Linux and Windows operating systems, and a good understanding of the software-based storage platform Ceph with the OpenStack cloud. Hands-on programming experience in Python, Ruby, PowerShell, shell scripting, and Perl scripting; expert-level Python scripting is a must. Self-motivation and critical thinking are required to build our own tools to reduce occurrences of human errors and improve the overall customer experience. Good oral and written communication skills are required for involvement in root cause analysis of production errors. Investigate high-level production issues to help the development and support teams resolve technical issues on time. Enhance the automation infrastructure to integrate test automation into the CI/CD cycle across different environments. Interface with various R&D groups (architects, developers, product managers).
About Us: 100ms is building a Platform-as-a-Service for developers integrating video-conferencing experiences into their apps. Our SDKs enable developers to add gold-standard audio-video conferencing quality with much faster shipping times. We are a team uniquely placed to work on this problem. We have built world-record-scale live video infrastructure powering billions of live video minutes in a day. We are a remote-first global team with engineers who've built video teams at Facebook and Hotstar. As part of the infrastructure team, you will be mainly responsible for looking after the cloud infrastructure. You Will Be: Building and setting up new development tools and infrastructure. Understanding the needs of stakeholders and conveying this to developers. Driving centralized solutions like logging, rate limiting, and service discovery. Working on ways to automate and improve development and release processes. Ensuring that systems are safe and secure against cybersecurity threats. You Have: Bachelor's degree or equivalent practical experience. 4 years of professional software development experience, or 2 years with an advanced degree. Expertise in managing large-scale cloud infrastructure, preferably AWS and Kubernetes. Experience in developing applications using programming languages like Python, Golang, and Ruby. Hands-on experience with Prometheus, Grafana, Fluentd, Splunk, etc.
Good To Have: Knowledge of Terraform, Chef, Helm, etc. Ability to take on complex and ambiguous problems. Strong inclination to keep up to date with the latest trends, learn new concepts, or contribute to open-source projects, and eagerness to talk about ideas in internal or external forums. You Will Gain: You'll be part of a small team at a fast-growing, engineering-first startup. You'll work with engineers across the globe with experience at Facebook and Hotstar. You can grow as an individual contributor or as a team leader - freedom to set your own goals. You'll work on problems at the cutting edge of real-time video communication technology at massive scale.
As a DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities: Build and operate infrastructure to support the website, backend cluster, and ML projects in the organization. Help teams become more autonomous, allowing the Operations team to focus on improving the infrastructure and optimizing processes. Deliver system management tooling to the engineering teams. Work on your own applications, which will be used internally. Contribute to open-source projects that we are using (or that we may start). Be an advocate for engineering best practices in and out of the company. Organize tech talks, participate in meetups, and represent Box8 at industry events. Share pager duty for the rare instances of something serious happening. Collaborate with other developers to understand and set up tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements: 1+ years of industry experience. Scale existing back-end systems to handle ever-increasing amounts of traffic and new product requirements. Ruby on Rails or Python and Bash/shell skills. Experience managing complex systems at scale. Experience with Docker, rkt, or a similar container engine. Experience with Kubernetes or similar clustering solutions. Experience with tools such as Ansible or Chef. Understanding of the importance of smart metrics and alerting. Hands-on experience with cloud infrastructure provisioning, deployment, and monitoring (we are on AWS and use ECS, ELB, EC2, ElastiCache, Elasticsearch, S3, CloudWatch). Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. Experience working on Linux-based servers. Managing large-scale production-grade infrastructure on AWS Cloud. Good knowledge of scripting languages like Ruby, Python, or Bash.
Experience creating a deployment pipeline from scratch. Expertise in any of the CI tools, preferably Jenkins. Good knowledge of Docker containers and their usage. Experience using infra/app monitoring tools like CloudWatch, New Relic, or Sensu.
Good to have: Knowledge of Ruby on Rails based applications and their deployment methodologies. Experience working on container orchestration tools like Kubernetes/ECS/Mesos.
Extra points for experience with: front-end development, New Relic, GCP, Kafka, Elasticsearch.
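"Creating a deployment pipeline from scratch", as asked above, usually reduces to running ordered stages and rolling back on the first failure. A minimal Python sketch of that control flow; the stage names and the shape of the rollback hook are assumptions for illustration, not a Jenkins API:

```python
def run_pipeline(stages, rollback):
    """Run (name, step) callables in order.

    On the first failure, call `rollback` with the list of stages that
    had already completed, then re-raise. Returns the completed names
    on success.
    """
    completed = []
    for name, step in stages:
        try:
            step()
        except Exception:
            rollback(list(completed))
            raise
        completed.append(name)
    return completed
```

In a real setup each step would shell out to build/test/deploy tooling, and a CI server such as Jenkins would own the orchestration; this only illustrates the ordering-plus-rollback contract.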
Skills Requirements: Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning. Expert with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker. Knowledge of Python would be desirable. Experience with HDP Manager/clients and various dashboards. Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking. Experience with automation/configuration management using Chef, Ansible, or an equivalent. Strong experience with any Linux distribution. Basic understanding of network technologies, CPU, memory, and storage. Database administration a plus.
Qualifications and Education Requirements: 2 to 4 years of experience with and detailed knowledge of core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark. Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.
Mandatory: Docker, AWS, Linux, Kubernetes or ECS. Prior experience provisioning and spinning up AWS clusters/Kubernetes. Production experience building scalable systems (load balancers, memcached, master/slave architectures). Experience supporting a managed cloud services infrastructure. Ability to maintain, monitor, and optimise production database servers. Prior work with cloud monitoring tools (Nagios, Cacti, CloudWatch, etc.). Experience with Docker, Kubernetes, Mesos, and NoSQL databases (DynamoDB, Cassandra, MongoDB, etc.). Other open-source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.). In-depth knowledge of the Linux environment. Prior experience leading technical teams through the design and implementation of systems infrastructure projects. Working knowledge of configuration management (Chef, Puppet, or Ansible preferred) and continuous integration tools (Jenkins preferred). Experience in handling large production deployments and infrastructure. DevOps-based infrastructure and application deployment experience. Working knowledge of the AWS network architecture, including designing VPN solutions between regions and subnets. Hands-on knowledge of the AWS AMI architecture, including the development of machine templates and blueprints. He/she should be able to validate that the environment meets all security and compliance controls. Good working knowledge of AWS services such as messaging, application services, migration services, and the cost management platform. Proven written and verbal communication skills. Understands and can serve as the technical team lead to oversee the build of the cloud environment based on customer requirements. Previous NOC experience. Client-facing experience with excellent customer communication and documentation skills.
Qualifications:
• Bachelor's or Master's Degree in Computer Science or Software Engineering from a reputed university.
• 5 - 8 years of experience in building scalable, secure, and compliant systems.
• More than 2 years of experience working with GCP deployments for millions of daily visitors.
• 5+ years of hosting experience in a large heavy-traffic environment.
• 5+ years of production application support experience in a high-uptime environment.
• Software development and monitoring knowledge with automated builds.
• Technology:
o Cloud: AWS or Google Cloud
o Source Control: GitLab, Bitbucket, or GitHub
o Container Concepts: Docker, Microservices
o Continuous Integration: Jenkins, Bamboo
o Infrastructure Automation: Puppet, Chef, or Ansible
o Deployment Automation: Jenkins, VSTS, or Octopus Deploy
o Orchestration: Kubernetes, Mesos, Swarm
o Automation: Node.js or Python
o Linux environment network administration, DNS, firewall, and security management
• Ability to adapt to the startup culture, handle multiple competing priorities, meet deadlines, and troubleshoot problems.
• Works closely with the development team, technical lead, and Solution Architects within the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization; containers - Kubernetes; core networking; cloud-native development; Platform as a Service - Cloud Foundry; Infrastructure as a Service; distributed systems; etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation, scalability, and ensuring maximum availability of server infrastructure.
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, Cassandra, etc.
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure, and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine hardware or software requirements related to such changes.
Job Responsibilities: Managing the cloud deployment with thousands of VMs and containers with 100% uptime. Budgeting the infra costs and planning for continued cost optimisation. Managing and motivating the team members. Designing the architecture to scale the back end to meet the business requirements.
Requirements: Strong background in Linux fundamentals and system administration. Good command of coding with scripting languages like Python and shell scripting. Experience with Docker and any one of the container management systems like Kubernetes, Docker Swarm, or Apache Mesos. Ability to use a wide variety of open-source technologies and cloud services like AWS, GCP, or Azure. Experience with automation/configuration management using Puppet, Chef, or an equivalent. Experience with Elasticsearch, MongoDB, Redis, Memcached, Kafka, RabbitMQ, or ActiveMQ. Knowledge of best practices and IT operations in an always-up, always-available service. Good team management and communication skills. Good experience with monitoring and alerting systems like Nagios and Zabbix. Experience with CI and CD tools.
Educational Qualification: BE/B.Tech/MCA
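Budgeting infra costs across thousands of VMs, as this role requires, starts with attributing spend per service. A hedged Python sketch of that aggregation; the record shape (service, hours, hourly rate) is an assumption, and real numbers would come from the cloud provider's billing export:

```python
from collections import defaultdict

def cost_by_service(records):
    """Sum costs per service from (service, hours, rate_per_hour) records."""
    totals = defaultdict(float)
    for service, hours, rate in records:
        totals[service] += hours * rate
    return dict(totals)

# Hypothetical month of usage: 720 hours ~ 30 days.
usage = [("web", 720, 0.25), ("db", 720, 0.125), ("web", 100, 0.25)]
print(cost_by_service(usage))  # {'web': 205.0, 'db': 90.0}
```

Sorting these totals descending gives the usual starting point for cost-optimisation work: attack the largest line items first.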
DevOps Engineer
Position: Full time
Base Location: Bangalore, with extensive and frequent travel to SEA
R&R:
● Work with people across various levels, right from the delivery team level to top management
● Support internal product and external customers on multiple platforms
● Work with customer teams (specifically development teams) to analyse their processes and environments to improve user satisfaction
● Improve client DevOps teams by enabling them with DevOps concepts and processes
● Act as the technical expert across multiple client projects, helping them enhance their delivery pipeline and overall DevOps and Agile practices
● Identify state-of-the-art CI/CD tools, prepare decision proposals and implementation plans for these tools, and carry out introduction & training, allowing client delivery to move faster
● Bring in new and cutting-edge methodologies in DevOps, both for Greyamp and for clients
Need to have:
● 3 - 5 years of relevant experience (at least 1 year of experience in development, and 2+ in DevOps)
● Understanding of Linux/Unix administration
● Understanding of one scripting language (Python, shell, Ruby, or Perl)
● Experience working with different OS servers (RedHat, Oracle, Microsoft)
● Strong understanding of version control (Git)
● Good understanding of build tools like Ant, Maven, or Gradle
● Experience setting up relevant dashboards using tools for code quality and vulnerability checks (SonarQube)
● Experience implementing and maintaining CI/CD pipelines (Jenkins, CircleCI, or GoCD)
● Good understanding of and working experience with Docker containers
● Experience working with configuration management tools (Chef, Puppet, or Ansible)
● Extensive knowledge of working with cloud platforms and maintaining automation scripts using Terraform or similar tools (preferably AWS, Azure, or GCP)
● Basic knowledge of working with the cloud product suite (preferably AWS, Azure, or GCP)
● Understanding of and experience working on SaaS architecture
● Understanding of and experience with VMware or other virtual environments
Nice to have:
● Experience working with testing automation tools (preferably Selenium)
● Knowledge of and experience with Kubernetes
● Understanding of and experience with monitoring tools like the ELK stack, Prometheus and Grafana, Nagios, or Dynatrace
● Experience working with database deployments
● Client handling experience and stakeholder management
Job Title: DevOps Engineer
Work Experience: 3 - 7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata
Pramata's unique, industry-proven offering digitizes critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications on Pramata's cloud-based customer digitization platform. Pramata's customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California, and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works
Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing and other systems, and delivers it in the context of a particular user's role and responsibilities. This is done through Pramata's unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth and ensures that this data remains consistent, accessible and highly secure.

The opportunity - What you get to do
You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment and application management, and day-to-day support of development teams.
You will manage the development of capabilities to achieve higher automation, quality and performance in automated build and deployment management, release management, on-demand environment configuration and automation, configuration and change management, and production environment support:
- Application monitoring, performance management and production support of mission-critical applications, including application and system uptime and remote diagnostics
- Security - ensure that the highly sensitive data from our customers is secure at all times
- Instrument applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
- High availability and disaster recovery - build and maintain systems designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place
- Automate provisioning and integration tasks as required to deploy new code
- Monitoring - take proactive steps to monitor complex, interdependent systems so that issues are identified and addressed in real time

Skills required:
- Excellent communicator with great interpersonal skills, driving clarity about intricate systems
- Hands-on experience with application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
- Good understanding of software application builds, configuration management and deployments
- Strong scripting skills in Shell, Ruby, Python, Perl, etc., with a passion for automation
- Comfortable with collaboration, open communication and reaching across functional borders
- Advanced problem-solving and task break-down ability

Additional Skills (good to have but not mandatory):
- In-depth understanding of and experience working with any cloud platform (e.g. AWS, Azure, Google Cloud)
- Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
- Ability to work under pressure and solve problems using an analytical approach; decisive, fast-moving, with a positive attitude

Minimum Qualifications:
- Bachelor's degree in Computer Science or a related field
- Background in technology operations for Linux-based applications, with 2 - 4 years of experience in enterprise software
- Strong programming skills in Python, Shell or Java
- Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
- Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS
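The 99.9% uptime target above translates into a concrete error budget. A quick back-of-the-envelope calculation (the 30-day measurement window is an assumption; the posting only states the percentage):

```python
def allowed_downtime_minutes(sla=0.999, period_hours=30 * 24):
    """Minutes of downtime permitted per period at a given SLA level.

    Defaults assume a 30-day month; adjust `period_hours` for other windows.
    """
    return period_hours * 60 * (1 - sla)

# 99.9% over a 30-day month allows roughly 43.2 minutes of downtime
```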
About the Company:
Tracxn is a Bangalore-based product company providing a research and deal-sourcing platform for venture capital, private equity, corp devs and professionals working around the startup ecosystem. We are a team of 750+ working professionals serving customers across the globe. Our clients include funds like Andreessen Horowitz, Sequoia Capital, Accel Partners and NEA, and large corporates such as ING, Societe Generale, LG and Royal Bank of Canada. We are backed by prominent investors like Ratan Tata, Nandan Nilekani, and SAIF Partners.

Founders:
- Neha Singh (ex-Sequoia, BCG | MBA - Stanford GSB)
- Abhishek Goyal (ex-Accel Partners, Amazon | BTech - IIT Kanpur)

Roles and Responsibilities:
- Design, develop and deliver products and frameworks (like queuing systems, schedulers, etc.) that will be used across all the engineering teams in Tracxn
- Evaluate, benchmark and roll out platform components like API gateways, load balancers, etc.
- Drive centralized solutions like service discovery and rate limiting for teams across Tracxn
- Extend or develop frameworks on top of Docker to solve Tracxn's scaling needs
- Work with application development teams to refactor apps or build new modules to help onboard new architectures

Skills and Experience:
- Must have experience in building fault-tolerant and scalable infrastructure
- Must have good conceptual, architectural and design skills
- Must have experience in at least one programming language such as Java, C#, Python, shell script or Bash script
- Must have experience with at least one type of database (RDBMS or NoSQL)
- Must have a deep understanding of how software works at the systems level, and familiarity with low-level aspects of performance, multi-threading, performance analysis, and optimization
- Good to have experience working with cloud platforms like AWS, Google Cloud, etc.
- Good to have experience in Docker, Kubernetes, Ansible, Chef, Puppet
- Good to have experience with tools such as Kafka, HAProxy, Nginx
- Team management experience will be an added bonus

What we have to offer:
- Work with a performance-oriented team driven by ownership
- Learn to design systems for high accuracy, efficiency, and scalability
- Focus on delivering quality work rather than deadlines
- Meritocracy-driven, candid culture
- Very high visibility into the startup ecosystem

Above all, you love to build and ship products that customers will use every day.
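The centralized rate-limiting responsibility mentioned above is commonly implemented as a token bucket. Below is a minimal single-process sketch; the class name and parameters are illustrative, not Tracxn's actual design:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity` and a
    sustained throughput of `rate` requests per second."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        """Consume one token if available; return whether the request passes."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A production version would keep the bucket state in a shared store such as Redis so that every service instance enforces the same limit.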
About HealthifyMe:
We were founded in 2012 by Tushar Vashisht and Sachin Shenoy, and incubated by Microsoft Accelerator. Today, we happen to be India's largest and most loved health & fitness app, with over 4 million users from 220+ cities in India. What makes us unique is our ability to bring together the power of AI-powered technology and human empathy to deliver measurable impact in our customers' lives. We do this through our team of elite nutritionists & trainers working together with the world's first AI-powered virtual nutritionist - "Ria", our proudest creation to date. Ria references data from over 200 million food & workout logs and 14 million conversations to deliver intelligent health & fitness suggestions to our customers. Ria also happens to be multilingual; "she" understands English, French, German, Italian & Hindi. Recently, Russia's Sistema and NEXT, Samsung's AI-focused fund, led a USD 12 million Series B funding into our business. We are the most liked app in India across categories, we've been consistently rated the no. 1 health & fitness app on the Play Store for 3 years running, and we received Google's "Editor's Choice" award in 2017. Some of the marquee corporates in the country, such as Cognizant, Accenture, Deloitte and MetLife, have also benefited from our employee engagement and wellness programs. Our global aspirations have taken us to the MENA, SEA and LATAM regions, with more markets to follow.

Desired Skills & Experience:

Requirements:
- Background in Linux/Unix administration
- Experience with automation/configuration management using Puppet, Chef or an equivalent
- Ability to use a wide variety of open-source tools
- Experience with AWS is a must
- Hands-on experience with at least 3 of: RDS, EC2, ELB, EBS, S3, SQS, CodeDeploy, CloudWatch
- Strong experience managing SQL and MySQL databases
- Hands-on experience managing web servers: Apache, Nginx, Lighttpd, Tomcat (Apache experience is valued)
- A working understanding of code and scripts (PHP, Python, Perl and/or Ruby)
- Knowledge of best practices and IT operations in an always-up, always-available service
- Good to have experience with Python and NodeJS

Stack:
- Python/Django, MySQL, NodeJS + MongoDB
- ElastiCache, DynamoDB, SQS, S3
- Deployed on AWS

We'd love to see:
- Experience in Python and data modeling
- Git and distributed revision control experience
- High-profile work on commercial or open-source projects
- Ownership of your work, and an understanding of the need for code quality, elegance, and robust infrastructure

Look forward to:
- Working with a world-class team
- Fun & work at the same place, with an amazing work culture and flexible timings
- Getting ready to transform yourself into a health junkie

Join HealthifyMe and make history!
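Scripting against AWS services like the ones listed above (SQS, EC2, CloudWatch) usually means retrying throttled API calls, and a capped exponential-backoff schedule is the standard pattern. The values below are illustrative defaults, not HealthifyMe's configuration:

```python
def backoff_delays(base=0.5, factor=2.0, retries=5, cap=8.0):
    """Capped exponential backoff schedule (in seconds) for retrying
    rate-limited API calls, e.g. AWS SDK requests."""
    return [min(cap, base * factor ** i) for i in range(retries)]

# backoff_delays() → [0.5, 1.0, 2.0, 4.0, 8.0]
```

Adding random jitter to each delay is a common refinement so that many clients do not retry in lockstep.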
Site Reliability Engineer

Job Description:
You will be administering the infrastructure of an indigenous, one-of-a-kind artificial intelligence cloud platform. You will be working with the dev teams to deploy, monitor and scale the distributed platform to handle real-time AI analysis and loads and loads of visual data (images and videos in various formats). We're looking for people with extensive DevOps experience and strong systems programming skills.

Responsibilities:
1. Be responsible for the uptime and reliability of SigTuple's infrastructure and help backend teams achieve it by writing reliable software and automation.
2. Work with other development teams to automate deployment of modules and manage the build and release pipeline.
3. Extensive process-level and node-level monitoring and auto-healing of the entire cluster.
4. Managing, provisioning and servicing cloud servers.
5. Contributing to back-end services and their infrastructure system design.

Requirements:
1. BTech/MTech in any engineering discipline.
2. 3 - 6 years of experience in a DevOps/software engineering role.
3. Experience in management of cloud computing services. Extensive knowledge of Kubernetes or any one cloud platform (AWS, GCP, Azure, etc.).
4. Proficiency with any major monitoring framework (Sensu, Nagios, etc.).
5. Comfortable with at least one scripting language (Python, Perl) and a configuration management or orchestration tool (Ansible, Chef, etc.).
6. Proficiency with OS and network fundamentals and strong Linux administration skills.
7. Experience with container tools (Docker ecosystem) is a plus.
8. Experience working with issues of scale.
9. Experience working at a startup is a plus.
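The process-level monitoring and auto-healing responsibility above can be sketched as a simple restart policy: restart a service only after several consecutive failed health checks, so a single transient failure does not cause flapping. The threshold is an assumption for illustration, not SigTuple's policy:

```python
def should_restart(health_checks, max_failures=3):
    """Decide whether to auto-heal (restart) a process.

    `health_checks` is a chronological list of booleans (True = healthy).
    Restart only when the last `max_failures` checks all failed.
    """
    recent = health_checks[-max_failures:]
    return len(recent) == max_failures and not any(recent)
```

In a real agent this decision would feed into systemd, a process supervisor, or a Kubernetes liveness probe rather than a direct kill-and-restart.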