Tech Lead - Skills Required
● Evidence of successful development/engineering team leadership experience.
● Experience communicating and collaborating in globally distributed teams.
● Ability to write robust, maintainable code in Python and/or Perl.
● Extensive knowledge of Linux, including familiarity with C, UNIX system calls, and low-level O/S and network protocols, as well as block, file, and object storage protocols.
● Experience using a modern configuration management system, such as SaltStack, Puppet, or Chef.
● Effective troubleshooting skills across hardware, O/S, network, and storage.

Skills Desired
● Enthusiasm for modern development tools and practices, including Git, Jenkins, automated testing, and continuous integration.
● Management of external vendor resources.
Skills Required
● Extensive experience with Linux, including familiarity with C, UNIX system calls, and low-level O/S and network protocols, as well as block, file, and object storage protocols.
● Experience using a modern configuration management system (such as Ansible, SaltStack, Puppet, or Chef) to automate the management of a large-scale Linux deployment.
● Effective troubleshooting skills across hardware, O/S, network, and storage.

Skills Desired
● Ability to write robust, maintainable code in Python and/or Perl.
● Experience working in a large, multi-national enterprise in any industry vertical, including communicating and collaborating in globally distributed teams.
● Enthusiasm for modern development tools and practices, including Git, Jenkins, automated testing, and continuous integration.
● Experience designing, implementing, and supporting large-scale production IaaS platforms.
● Knowledge of building and managing Docker containers in a secure manner.
DevOps/Automation Engineer

The DevOps Engineer is a member of the Cloud DevOps Team and will work toward the team's ongoing goals of bringing DevOps culture, processes, and tools. Minimum 3+ years in the job role below.

Job Role:
• Establish a microservices plan with regard to data services.
• Develop automation scripts for database creation, management, and migration.
• Set up integration environments and provision databases for new projects.
• Dynamically extract, transform, and load data from one silo to others, employing TTL strategies to reduce the data or enrich some of it.
• Continuously monitor and optimize performance: strive for low, consistent tail latency and a joint view of the application and the database.
• Expert zone: automate scale-out and automate schema changes.
• Backup and recovery: back up to cold storage and restore. Plan for downtime and high availability using multiple data center disaster recovery capabilities.

Technical Skill Set Required:
• Designing and writing Chef cookbooks - MUST HAVE
• Chef orchestration and configuration - MUST HAVE
• Experience creating Ansible playbooks for infrastructure build/management automation - Highly Desirable
• Oracle database administration using Chef and Ansible - Highly Desirable
• Strong in database concepts of leading RDBMS and NoSQL platforms.
• Experience setting up Docker and containers and troubleshooting HA orchestration platforms.
• Strong experience working on Unix-based platforms and writing scripts using Shell and Python.
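To make the Ansible-playbook skill mentioned above concrete, here is a minimal hypothetical sketch of a playbook that provisions a PostgreSQL database for a new project. The host group, package/service names, and database name are illustrative assumptions, not details from the posting:

```yaml
# Hypothetical playbook: provision a PostgreSQL database for a new project.
# The "db_servers" group and all names below are illustrative placeholders.
- hosts: db_servers
  become: true
  tasks:
    - name: Install the PostgreSQL server package
      ansible.builtin.package:
        name: postgresql-server
        state: present

    - name: Ensure PostgreSQL is running and enabled at boot
      ansible.builtin.service:
        name: postgresql
        state: started
        enabled: true

    - name: Create the project database
      community.postgresql.postgresql_db:
        name: project_db
        state: present
      become_user: postgres
```

Because the tasks are declarative and idempotent, re-running the playbook leaves an already-provisioned host unchanged, which is what makes this style suitable for the "build/management automation" the posting describes.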
Why should you be interested in this role?

Biofourmis is pioneering an entirely new category of digital health by developing clinically validated software-based therapeutics that provide better outcomes for patients and smarter engagement and tracking tools for clinicians. By applying machine learning technology, we are creating a truly unique movement in the health space. Our team works in a cross-functional agile setup consisting of mobile developers, backend developers, designers, product managers, researchers, and scrum masters.

Biofourmis, headquartered in Boston, develops and delivers clinically validated software-based therapeutics to provide cost-effective solutions for payers, accelerated research and drug development for biopharmaceutical companies, advanced tools for clinicians to deliver personalized care, and, ultimately, better outcomes for patients. Our robust digital therapeutics products and pipeline cover multiple therapeutic areas, including heart failure, acute coronary syndrome, COPD, and chronic pain. A successful Series B round, strategic acquisitions, key multi-year commercial contracts, FDA approvals, a new U.S. headquarters, and industry recognition were among some of our achievements in 2019.

Job Description
We are looking for full-time direct-hire candidates with mid-to-senior-level experience. Ideally, we need someone with a strong background in setting up Linux servers who can orchestrate CI/CD-based deployments and Docker container-based environments, with good hands-on experience in AWS, Apache, and AWS services. In addition, you will need to communicate the CI/CD workflow to the backend engineers of various teams. If you have the skills listed above and are looking for a new opportunity that will change your career and help you blossom in our industry, we would love the opportunity to speak with you further.
More than 4 years of experience and knowledge of the following:
• Must be highly proficient with Linux servers, with in-depth knowledge of file systems, web servers (Nginx/Apache), and managing services.
• Strong understanding of HTTP(S) and DNS is a must.
• Must have experience maintaining a highly scalable server infrastructure.
• Must be skilled with AWS, especially setting up VPCs, CloudFormation, CodeDeploy, DNS configuration, load balancers, and auto scaling.
• Must have experience with deployment and maintenance of DBMSs like MySQL, MongoDB, and PostgreSQL. Knowledge of Cassandra, Hadoop, SQL, HBase, or InfluxDB is a plus.
• Must have exposure to automating CI/CD deployment for server and serverless workflows using tools like Jenkins, Chef, Puppet, SAM, Ansible, Serverless, Docker Compose, CloudFormation, CodeDeploy, ECS, and Kubernetes (optional).
• Must be skilled with application log processing and analysis for distributed web services using technologies like ELK.
• Must be able to work with the development team to collaborate on continuous integration and deployment activities.
• Must have experience with server maintenance activities (monitoring, backup, restore).
• Work with the development and product teams to develop systems, policies, and workflows for the release and rollback of application versions to the production environment.
• Basic working knowledge of shell scripts / Python is a must.
• Strong communication skills.
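As a small illustration of the log-processing skill listed above (a sketch, not part of the posting): tallying HTTP status classes from access-log lines in plain Python, the kind of pre-aggregation often done before shipping logs to a stack like ELK. The common-log-style field layout is an assumption:

```python
from collections import Counter

def status_class_counts(log_lines):
    """Count HTTP responses per status class (2xx/3xx/4xx/5xx).

    Assumes common-log-style lines where the status code is the
    second-to-last whitespace-separated field, e.g.:
    '1.2.3.4 - - [ts] "GET / HTTP/1.1" 200 512'
    """
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 2:
            continue  # skip malformed or empty lines
        status = fields[-2]
        if status.isdigit():
            counts[status[0] + "xx"] += 1
    return counts

lines = [
    '10.0.0.1 - - [01/Jan/2024] "GET / HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2024] "GET /x HTTP/1.1" 404 128',
    '10.0.0.3 - - [01/Jan/2024] "POST /y HTTP/1.1" 500 64',
]
print(status_class_counts(lines))  # Counter({'2xx': 1, '4xx': 1, '5xx': 1})
```

In practice, a tool such as Logstash or Fluentd would do this parsing at scale; the sketch only shows the underlying field extraction.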
AWS DevOps Engineer

Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.

Responsibilities:
This is a highly accountable role, and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensuring regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• Serving as the first point of contact for handling customer (possibly internal stakeholder) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Developing tooling and processes to drive and improve the customer experience; creating playbooks.
• Eliminating manual tasks via configuration management.
• Intelligently migrating services from one AWS region to other AWS regions.
• Creating, implementing, and maintaining security policies to ensure ISO/GDPR/SOC/PCI compliance.
• Verifying that infrastructure automation meets compliance goals and is current with the disaster recovery plan.
• Evangelizing configuration management and automation to other product developers.
• Keeping up to date with upcoming technologies to maintain state-of-the-art infrastructure.

Required candidate profile:
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.).
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch configuration, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands-on experience with Docker is a big plus.
• Experience working in an Agile, fast-paced DevOps environment.
• Strong knowledge of databases such as MongoDB, MySQL, DynamoDB, Redis, and Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, Nagios, etc.
• Fluency with version control systems, with a preference for Git.
• Strong Linux-based infrastructure and Linux administration experience.
• Experience installing and configuring application servers such as WebLogic, JBoss, and Tomcat.
• Hands-on experience with logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and train others on technical and procedural topics.
Roles and Responsibilities
• Managing availability, performance, and capacity of infrastructure and applications.
• Building and implementing observability for application health/performance/capacity.
• Optimizing on-call rotations and processes.
• Documenting "tribal" knowledge.
• Managing infra platforms such as Mesos/Kubernetes, CI/CD, observability (Prometheus/New Relic/ELK), cloud platforms (AWS/Azure), databases, and data platform infrastructure.
• Helping onboard new services through a production readiness review process.
• Providing reports on service SLOs/error budgets/alerts and operational overhead.
• Working with dev and product teams to define SLOs/error budgets/alerts.
• Working with the dev team to gain an in-depth understanding of the application architecture and its bottlenecks.
• Identifying observability gaps in product services and infrastructure, and working with stakeholders to fix them.
• Managing outages, conducting detailed RCAs with developers, and identifying ways to avoid recurrence.
• Managing/automating upgrades of the infrastructure services.
• Automating toil work.

Experience & Skills
• 6+ years of total experience.
• Experience as an SRE/DevOps/Infrastructure Engineer on large-scale microservices and infrastructure.
• A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
• A deep understanding of computer science, software development, and networking principles.
• Demonstrated experience with languages such as Python, Java, Golang, etc.
• Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
• Extensive experience with DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
• Expertise in GitOps, infrastructure-as-code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, and Ansible.
• Expertise in Amazon Web Services (AWS) and/or other relevant cloud infrastructure solutions like Microsoft Azure or Google Cloud.
• Experience building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo, etc.
• Experience managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
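The SLO/error-budget reporting that the posting above mentions can be made concrete with a small sketch. The 99.9% target and 30-day window here are illustrative assumptions, not figures from the posting:

```python
def error_budget_minutes(slo, window_days=30):
    """Minutes of allowed downtime in the window for a given availability SLO."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_consumed(slo, downtime_minutes, window_days=30):
    """Fraction of the error budget consumed by observed downtime."""
    return downtime_minutes / error_budget_minutes(slo, window_days)

# A 99.9% SLO allows 43.2 minutes of downtime per 30 days;
# 21.6 minutes of observed downtime consumes half of that budget.
print(round(error_budget_minutes(0.999), 2))   # 43.2
print(round(budget_consumed(0.999, 21.6), 2))  # 0.5
```

A report like the one the posting asks for would typically compute this per service from monitoring data and alert when the consumed fraction approaches 1.0.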
Requirements:
• Experience with various cloud development methodologies and virtual environments.
• 2-4 years of experience with container orchestration (Docker, Kubernetes).
• 2-4 years of experience with configuration management tooling (Chef, Puppet, Ansible, Jenkins).
• Strong foundation in programming, algorithms, software application design, and databases.
• Experience building dashboards and aggregating metrics.

Responsibilities:
• Maintain, optimize, and grow our production, staging, and test infrastructure hosted on VM infra and OpenStack.
• Develop automation scripts to manage infrastructure security and monitor application performance using Ansible, Chef, Puppet, Nagios, and Splunk.
• Handle server and application patch management for Unix/Linux- and Windows-based production servers.
• Strong working experience with Linux and Windows operating systems, and a good understanding of the software-based storage platform Ceph with an OpenStack cloud.
• Hands-on programming experience in Python, Ruby, PowerShell, Shell scripting, and Perl scripting; expert-level Python scripting is a must.
• Self-motivation and critical thinking are required to build our own tools that reduce occurrences of human error and improve the overall customer experience.
• Good oral and written communication skills are required for involvement in root-cause analysis of production errors.
• Investigate high-level production issues to help the development and support teams resolve technical issues on time.
• Enhance the automation infrastructure to integrate test automation into the CI/CD cycle across different environments.
• Interface with various R&D groups (architects, developers, product managers).
About Us:
100ms is building a Platform-as-a-Service for developers integrating video-conferencing experiences into their apps. Our SDKs enable developers to add gold-standard audio-video conferencing quality with much faster shipping times. We are a team uniquely placed to work on this problem: we have built world-record-scale live video infrastructure powering billions of live video minutes in a day. We are a remote-first global team with engineers who've built video teams at Facebook and Hotstar. As part of the infrastructure team, you will be mainly responsible for looking after the cloud infrastructure.

You Will Be:
• Building and setting up new development tools and infrastructure
• Understanding the needs of stakeholders and conveying these to developers
• Driving centralized solutions like logging, rate limiting, and service discovery
• Working on ways to automate and improve development and release processes
• Ensuring that systems are safe and secure against cybersecurity threats

You Have:
• A Bachelor's degree or equivalent practical experience
• 4 years of professional software development experience, or 2 years with an advanced degree
• Expertise in managing large-scale cloud infrastructure, preferably AWS and Kubernetes
• Experience developing applications in programming languages like Python, Golang, and Ruby
• Hands-on experience with Prometheus, Grafana, Fluentd, Splunk, etc.
Good To Have:
• Knowledge of Terraform, Chef, Helm, etc.
• Ability to take on complex and ambiguous problems
• A strong inclination to keep up to date with the latest trends, learn new concepts, or contribute to open-source projects, and eagerness to discuss ideas in internal or external forums

You Will Gain:
• You'll be part of a small team at a fast-growing, engineering-first startup
• You'll work with engineers across the globe with experience at Facebook and Hotstar
• You can grow as an individual contributor or as a team leader, with the freedom to set your own goals
• You'll work on problems at the cutting edge of real-time video communication technology at massive scale
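One of the centralized solutions named in the posting above, rate limiting, is commonly implemented as a token bucket. A minimal sketch (the class name, parameters, and single-token cost are illustrative assumptions):

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter (illustrative sketch).

    `rate` tokens are added per second up to `capacity`; each
    allowed request consumes one token.
    """
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity      # start with a full bucket
        self.clock = clock          # injectable for testing
        self.last = clock()

    def allow(self):
        # Refill tokens for the time elapsed since the last call,
        # then spend one token if any is available.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With a frozen clock, a bucket of capacity 3 allows exactly 3 requests.
bucket = TokenBucket(rate=1, capacity=3, clock=lambda: 0.0)
print([bucket.allow() for _ in range(4)])  # [True, True, True, False]
```

A production version of this idea typically keeps the bucket state in a shared store such as Redis so that all API nodes enforce the same limit.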
As a DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product's performance and reliability.

Responsibilities:
• Build and operate infrastructure to support the website, backend clusters, and ML projects in the organization.
• Help teams become more autonomous, allowing the Operations team to focus on improving the infrastructure and optimizing processes.
• Deliver system management tooling to the engineering teams.
• Work on your own applications, which will be used internally.
• Contribute to open-source projects that we are using (or that we may start).
• Be an advocate for engineering best practices in and out of the company.
• Organize tech talks, participate in meetups, and represent Box8 at industry events.
• Share pager duty for the rare instances of something serious happening.
• Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.

Requirements:
• 1+ years of industry experience.
• Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
• Ruby on Rails or Python, plus Bash/Shell skills.
• Experience managing complex systems at scale.
• Experience with Docker, rkt, or a similar container engine.
• Experience with Kubernetes or similar clustering solutions.
• Experience with tools such as Ansible or Chef.
• Understanding of the importance of smart metrics and alerting.
• Hands-on experience with cloud infrastructure provisioning, deployment, and monitoring (we are on AWS and use ECS, ELB, EC2, ElastiCache, Elasticsearch, S3, and CloudWatch).
• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
• Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
• Experience working on Linux-based servers.
• Experience managing large-scale production-grade infrastructure on the AWS cloud.
• Good knowledge of scripting languages like Ruby, Python, or Bash.
• Experience creating a deployment pipeline from scratch.
• Expertise in any of the CI tools, preferably Jenkins.
• Good knowledge of Docker containers and their usage.
• Experience using infra/app monitoring tools like CloudWatch/New Relic/Sensu.

Good to have:
• Knowledge of Ruby on Rails-based applications and their deployment methodologies.
• Experience working with container orchestration tools like Kubernetes/ECS/Mesos.

Extra points for experience with:
• Front-end development
• New Relic
• GCP
• Kafka, Elasticsearch
Skills Requirements
• Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning.
• Expertise with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker.
• Knowledge of Python would be desirable.
• Experience with HDP Manager/clients and various dashboards.
• Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking.
• Experience with automation/configuration management using Chef, Ansible, or an equivalent.
• Strong experience with any Linux distribution.
• Basic understanding of network technologies, CPU, memory, and storage.
• Database administration is a plus.

Qualifications and Education Requirements
• 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark.
• Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.
What You’ll Do
• As a member of our cross-functional squad, you will own the entire infrastructure.
• You will design, document, and implement systems.
• You will write and review code under the scope of your squad.
• Take ownership of and troubleshoot sophisticated systems under pressure.
• Partake in the on-call rotation with DevOps and backend engineers.
• You will be aware of the systems at all times.

Required Skills
• 3+ years of commanding knowledge of systems & networking [Linux]
• Architect-level knowledge of a cloud computing platform [preferably AWS]
• Excellent architectural/implementation-level knowledge of any container orchestration tool [we use EKS]
• Good administration/tuning knowledge of SQL/NoSQL [RDS/ES/Kafka/Mongo/Scylla]
• A must-have skill: very strong coding and scripting skills in any of the following languages - Go, Python, Bash, Java
• You are comfortable with large-scale production systems and technologies, for example, load balancing, monitoring/logging, distributed systems, and configuration management [Terraform/Vault/Consul]
• Good understanding of and experience with software engineering best practices like automation and CI-CD
• You believe in solving problems with open-source tools and technology.
• Always ready to learn more and adopt new cutting-edge technology with the right value proposition.

Good to have:
1. Ansible
2. Building tools/dashboards to monitor infrastructure
3. EKS
4. Helm charts
5. Jenkins
6. Load balancers
7. Open-source contributions

What can you expect
● Work with a team who believes in STUDENT FIRST; you will have an opportunity to build the first (BE ORIGINAL) Open Social Learning Platform that can impact millions of students across the globe
● A working environment where you can look at things differently, challenge them, and offer solutions, and which also offers you the freedom to commit 'n' number of first-time mistakes - NEVER DEPRIVE A LEARNER
● Complete ownership of and responsibility for the success of your autonomous team across all scopes of work - OWN IT and BE BOLD
● An opportunity to lead a bunch of young, smart, driven, and dynamic engineers who are committed to going BEYOND SELF to solve business challenges
● Work closely with the leadership team, who live by the value of BE BETTER EVERYDAY, to help grow an amazing organisation
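The configuration-management tooling named in the posting above includes Terraform; a minimal, hypothetical sketch of what that looks like follows. The provider region, AMI ID, and resource names are placeholder assumptions, not details from the posting:

```hcl
# Hypothetical Terraform sketch: one EC2 instance behind a security group.
# Region, AMI ID, and names below are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_security_group" "web" {
  name = "web-sg"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-0123456789abcdef0" # placeholder
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]

  tags = {
    Name = "web-server"
  }
}
```

Running `terraform plan` against such a file shows the diff between the declared and actual infrastructure, which is the core of the infrastructure-as-code workflow the posting expects.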
Mandatory: Docker, AWS, Linux, Kubernetes or ECS
• Prior experience provisioning and spinning up AWS clusters / Kubernetes.
• Production experience building scalable systems (load balancers, memcached, master/slave architectures).
• Experience supporting a managed cloud services infrastructure.
• Ability to maintain, monitor, and optimise production database servers.
• Prior work with cloud monitoring tools (Nagios, Cacti, CloudWatch, etc.).
• Experience with Docker, Kubernetes, Mesos, and NoSQL databases (DynamoDB, Cassandra, MongoDB, etc.).
• Experience with other open-source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.).
• In-depth knowledge of the Linux environment.
• Prior experience leading technical teams through the design and implementation of systems infrastructure projects.
• Working knowledge of configuration management (Chef, Puppet, or Ansible preferred) and continuous integration tools (Jenkins preferred).
• Experience handling large production deployments and infrastructure.
• DevOps-based infrastructure and application deployment experience.
• Working knowledge of the AWS network architecture, including designing VPN solutions between regions and subnets.
• Hands-on knowledge of the AWS AMI architecture, including the development of machine templates and blueprints.
• Ability to validate that the environment meets all security and compliance controls.
• Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, and the Cost Management Platform.
• Proven written and verbal communication skills.
• Ability to serve as the technical team lead and oversee the build of the cloud environment based on customer requirements.
• Previous NOC experience.
• Client-facing experience with excellent customer communication and documentation skills.
Qualifications:
• Bachelor's or Master's degree in Computer Science or Software Engineering from a reputed university.
• 5-8 years of experience building scalable, secure, and compliant systems.
• More than 2 years of experience working with GCP deployments serving millions of daily visitors.
• 5+ years of hosting experience in a large, heavy-traffic environment.
• 5+ years of production application support experience in a high-uptime environment.
• Software development and monitoring knowledge, with automated builds.
• Technology:
  o Cloud: AWS or Google Cloud
  o Source Control: GitLab, Bitbucket, or GitHub
  o Container Concepts: Docker, microservices
  o Continuous Integration: Jenkins, Bamboo
  o Infrastructure Automation: Puppet, Chef, or Ansible
  o Deployment Automation: Jenkins, VSTS, or Octopus Deploy
  o Orchestration: Kubernetes, Mesos, Swarm
  o Automation: Node.js or Python
  o Linux environment network administration, DNS, firewall, and security management
• Ability to adapt to startup culture, handle multiple competing priorities, meet deadlines, and troubleshoot problems.
• Works closely with the development team, technical lead, and Solution Architects within the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation, and scalability, and ensuring maximum availability of the server infrastructure.
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, Cassandra, etc.
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure, and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine hardware or software requirements related to such changes.