Mandatory: Docker, AWS, Linux, Kubernetes or ECS
- Prior experience provisioning and spinning up AWS clusters / Kubernetes
- Production experience building scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor and optimise production database servers
- Prior work with cloud monitoring tools (Nagios, Cacti, CloudWatch, etc.)
- Experience with Docker, Kubernetes, Mesos, and NoSQL databases (DynamoDB, Cassandra, MongoDB, etc.)
- Other open source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge of the Linux environment
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects
- Working knowledge of configuration management (Chef, Puppet or Ansible preferred) and continuous integration tools (Jenkins preferred)
- Experience handling large production deployments and infrastructure
- DevOps-based infrastructure and application deployment experience
- Working knowledge of AWS network architecture, including designing VPN solutions between regions and subnets
- Hands-on knowledge of the AWS AMI architecture, including the development of machine templates and blueprints
- Ability to validate that the environment meets all security and compliance controls
- Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, and Cost Management
- Proven written and verbal communication skills
- Ability to serve as the technical team lead overseeing the build of the cloud environment based on customer requirements
- Previous NOC experience
- Client-facing experience with excellent customer communication and documentation skills
Description
We are developing the world’s first enterprise-level Platform-as-a-Service (PaaS) for robots, creating a rare opportunity for an experienced, product-focused engineering professional. The PaaS offers innovative features to handle every part of the product lifecycle required to support and deliver consumer-facing connected machines and services. Site Reliability Engineering combines the skills of software and systems engineering. Your key responsibility is to optimize existing systems, build infrastructure, and eliminate toil through automation, making systems more reliable and ensuring the highest possible uptime for a cloud-based robotics system. Your responsibilities will include, but are not limited to:
- Support services before they go live through activities such as system design consulting, capacity planning, and launch reviews
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health
- Engage in and improve the whole lifecycle of services, from inception and design through deployment, operation, and refinement
- Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity
- Practice sustainable incident response and postmortems
- Build and evolve the operations handbook
Requirements
- Bachelor’s degree in Computer Science or a similar technical field of study, or equivalent practical experience with an outstanding track record
- At least 2 years of experience in product development and/or supporting operations
- Mastery of one or more programming languages, including but not limited to Python, Golang, Ruby, and Bash
- Familiarity with configuration management, Docker, IaaS, PaaS, continuous delivery, continuous integration, and DevOps
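The "measuring and monitoring availability" responsibility above is usually framed in terms of SLOs and error budgets. A minimal sketch of that arithmetic, with purely illustrative SLO values (not figures from this posting):

```python
# Sketch: translate an availability SLO into an error budget,
# i.e. how many minutes of downtime per period the SLO allows.
# The SLO targets below are illustrative assumptions.

def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Minutes of allowed downtime for a given availability SLO."""
    total_minutes = period_days * 24 * 60
    return (1.0 - slo) * total_minutes

if __name__ == "__main__":
    for slo in (0.999, 0.9999):
        print(f"SLO {slo:.4%}: {error_budget_minutes(slo):.2f} min / 30 days")
```

For example, a 99.9% SLO over a 30-day month leaves a budget of 43.2 minutes of downtime; adding a nine shrinks it to about 4.3 minutes.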
- Solid understanding of network fundamentals and practical experience troubleshooting networked services
- Demonstrated proficiency with Linux systems, public cloud platforms, and associated tools and technologies
- Fluency in English
Preferred Qualifications
- Extremely organised, detail-oriented and thorough in every undertaking
- Ability to balance multiple tasks and projects effectively and quickly adapt to new variables
- Experience in designing, analysing and troubleshooting distributed systems
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
- Ability to debug and optimize code and automate routine tasks
Benefits
- Competitive salary
- Stock options
- International working environment
- Bleeding-edge technology
- Working with exceptionally talented engineers
YOE: 1-3 years
Skills: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure.
➢ Conceptualize, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect and build a containerized infrastructure using Docker, Mesosphere or similar SaaS platforms.
➢ Work with developers to institute systems, policies and workflows which allow for rollback of deployments; triage the release of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with postmortems, follow-up and review of issues affecting production availability.
➢ Establish and enforce systems monitoring tools and standards
➢ Establish and enforce risk assessment policies and standards
➢ Establish and enforce escalation policies and standards
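The "workflows which allow for rollback of deployments" item is often implemented with versioned release directories and an atomically repointed `current` symlink, so a rollback is just moving the link back. A minimal sketch under that assumption (directory layout and names are illustrative, not from the posting):

```python
# Sketch: rollback-friendly release layout. Each deploy lands in its own
# releases/<version> directory; "current" is a symlink swapped atomically,
# so rollback is the same repointing operation in reverse.
import os
import tempfile

def deploy(root: str, version: str) -> None:
    release = os.path.join(root, "releases", version)
    os.makedirs(release, exist_ok=True)
    link = os.path.join(root, "current")
    tmp = link + ".tmp"
    os.symlink(release, tmp)   # build the new link beside the old one
    os.replace(tmp, link)      # atomic rename over the old link (POSIX)

def rollback(root: str, version: str) -> None:
    deploy(root, version)      # repointing "current" is the same operation

if __name__ == "__main__":
    root = tempfile.mkdtemp()
    deploy(root, "v1")
    deploy(root, "v2")
    rollback(root, "v1")
    print(os.readlink(os.path.join(root, "current")))
```

Because the symlink swap is a single `rename`, requests never observe a half-deployed state; the old release directory stays on disk for the next rollback.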
Only candidates ready to attend at least one round of face-to-face (F2F) interviews should apply.
Job Description:
- Linux: 4 or more years in Unix systems engineering with experience in Red Hat Linux, CentOS or Ubuntu.
- AWS: Working experience and a good understanding of the AWS environment, including VPC, EC2, EBS, S3, RDS, SQS, CloudFormation, Lambda, and Redshift.
- DevOps: Experience with DevOps automation - orchestration/configuration management and CI/CD tools (Ansible, Chef, Puppet, Jenkins, etc.). Puppet and Ansible experience a strong plus.
- Programming: Experience programming with Python, Bash, REST APIs, and JSON encoding.
- Version Control: Git experience is nice to have.
- Testing: Strong familiarity with CI/CD and scripting (Python, Unix shell, etc.).
- Security: Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
- Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, Nagios or Splunk.
- Backup/Recovery: Experience with the design and implementation of big data backup/recovery solutions.
- Networking: Working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB, HAProxy) and high-availability architecture.
- Ability to keep systems running at peak performance and apply operating system upgrades, patches, and version upgrades as required.
- Implementation of Auto Scaling for instances under ELB using ELB health checks.
- Work experience with S3 buckets.
- IAM and its policy management to restrict users to particular AWS resources.
- Strong ability to troubleshoot any issues generated while building, deploying and in production support.
- Experience in performance tuning, garbage collection and memory leaks, networking and information security, and I/O monitoring and analysis.
- Experience in resolving escalated tickets and performance problems, and creating Root Cause Analysis (RCA) documents for Severity 1 issues.
- Delivered disaster recovery implementations and participated in a 24/7 on-call rotation with other team members.
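"IAM and its policy management to restrict users to particular AWS resources" in the list above boils down to writing least-privilege policy documents. A sketch that builds one scoping a user to a single S3 bucket; the bucket name is a made-up placeholder, while the policy grammar follows AWS's documented JSON format:

```python
# Sketch: a least-privilege IAM policy document restricting access
# to one S3 bucket. "example-app-data" is a hypothetical bucket name.
import json

def s3_bucket_policy(bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",      # the bucket (for ListBucket)
                    f"arn:aws:s3:::{bucket}/*",    # objects inside it
                ],
            }
        ],
    }

if __name__ == "__main__":
    print(json.dumps(s3_bucket_policy("example-app-data"), indent=2))
```

Note the two Resource ARNs: `ListBucket` is checked against the bucket ARN itself, while object-level actions are checked against the `/*` object ARN, so both are needed.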
DevOps Engineer Skill Set - Must have:
- AWS: 2+ years’ experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, IAM, CloudWatch) to develop and maintain an AWS-based cloud solution, with emphasis on best-practice cloud security.
- DevOps: Solid experience as a DevOps Engineer in a 24x7 uptime AWS environment, including automation experience with configuration management tools. Strong scripting and automation skills.
- Expertise in Linux system administration.
- Beneficial to have basic DB administration experience (MySQL).
- Working knowledge of some of the major open source web containers and servers such as Apache, Tomcat and Nginx.
- Understanding of network topologies and common network protocols and services (DNS, HTTP(S), SSH, FTP, SMTP).
- Experience with Docker, Ansible and Python.
Role & Responsibilities:
- Deploy, automate, maintain and manage an AWS cloud-based production system to ensure the availability, performance, scalability and security of production systems.
- Build, release and configuration management of production systems.
- System troubleshooting and problem solving across platform and application domains.
- Ensure critical system security using best-in-class cloud security solutions.
Requirements:
• Hands-on programming skills developing automation modules in Python/Go/Bash
• Hands-on experience with both private and public cloud infrastructure, interfacing programmatically through APIs
• Relevant DevOps/operations/development experience working in an Agile DevOps culture on large-scale distributed systems in production infrastructure
• Successful track record of providing production support for large-scale distributed systems, with experience in Docker Swarm and Kubernetes
• Hands-on experience with continuous integration and build tools like Jenkins, version control systems like Git/SVN, and ticketing systems like Jira/Bugzilla
• Experience setting up and managing DevOps tools for repositories, monitoring, log analysis, etc.
• Solid understanding of applications, networking and open source tools, with working knowledge of clusters and service discovery systems
• Working knowledge of security and vulnerability detection
• Should have performed production migrations and updates/upgrades with minimal or zero downtime
• Good knowledge of security, audit and compliance requirements
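A recurring building block in the "automation modules interfacing programmatically through APIs" bullet above is retrying transient failures with exponential backoff. A minimal sketch, with a stand-in `flaky` function in place of a real cloud API call:

```python
# Sketch: retry a flaky call with exponential backoff, a common pattern
# in cloud-API automation modules. The "flaky" function below simulates
# an API that fails twice before succeeding.
import time

def retry(func, attempts: int = 4, base_delay: float = 0.01):
    """Call func until it succeeds; double the delay after each failure."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise                  # out of attempts: surface the error
            time.sleep(delay)
            delay *= 2

if __name__ == "__main__":
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("transient error")
        return "ok"
    print(retry(flaky))  # succeeds on the third call
```

Real modules usually add jitter to the delay and retry only on error classes known to be transient (throttling, timeouts), not on every exception.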
1. Be the first one to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation & DevOps; be a solution visionary and technology expert across multiple channels.
2. Continually augment skills and learn new tech as the technology and client needs evolve.
3. Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
4. Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods.
5. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
6. Participate in technical reviews of requirements, designs, code and other artifacts.
7. Identify and keep abreast of new technical concepts in Google Cloud Platform.
8. Security, Risk and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance and related areas.
Reflektive’s infrastructure team is responsible for building and supporting the services that run Reflektive. The team also assists in scaling the engineering organization by participating in architectural design reviews, establishing best practices and procedures, and debugging problems across the whole stack. Reflektive has major infrastructure initiatives to tackle this year. Our tech stack, workflow, and team continue to adapt along with our growing customer base and internal requirements. Initiatives this year may include prototyping new service architecture on Linux containers, heavily investing in automation, and implementing strong security practices. You’ll join a team where everyone, including you, is active in defining what we ship. You’ll work with product and engineering teams to understand our needs and learn how your skills can best make a difference. Our products help other companies grow their employees to their full potential. Likewise, we also value personal growth within our company. One aspect of this is exemplified in the learning budget we have in Engineering. Every engineer gets an annual education budget to spend on books, resources, or attending conferences to live our core value of #AlwaysLearning.
Responsibilities
- Heavily influence the future of Reflektive’s technical architecture
- Be hands-on in scaling and securing the company’s infrastructure
- Collaborate across engineering and product teams to define requirements
- Communicate trade-offs of technical design
Must have
- Experience with scaling infrastructure for a growing engineering team
- Strong experience in system and application design
- Experience building and deploying microservices in development and production
- Strong hands-on experience with provisioning/orchestration: Chef, Puppet, Terraform, Kubernetes
- Strong knowledge of and experience with Linux containers (Docker, Vagrant)
- Strong Unix fundamentals
- Strong network fundamentals
- Senior-level experience with Git and shell scripting
- Strong knowledge of and experience with AWS: EC2, S3, RDS, Route53
- Strong experience in release and deploy automation; CI/CD (Continuous Integration / Deployment) experience
- Bachelor’s Degree in Computer Science, Engineering, or related field
- 7+ years of industry experience
Bonus Points
- Languages: Ruby
- Database administration
- Experience with scaling a monolith
- Experience with Elasticsearch, Kinesis, Storm
- Experience working in an Agile and Scrum environment
- Web-based/SaaS company background
- Startup experience
About Reflektive
Forward-thinking organizations use Reflektive’s people management platform to drive employee performance and development with Real-Time Feedback, Recognition, Check-Ins, Goal Management, Performance Reviews, 1-on-1 Profiles, and Analytics. Reflektive’s more than 500 customers include Blue Origin, Comcast, Instacart, Dollar Shave Club, Healthgrades, Wavemaker Global, and Protective Life. Backed by Andreessen Horowitz, Lightspeed Venture Partners, and TPG Growth, Reflektive has raised more than $100 million to date, and was ranked the 13th Fastest Growing Company in North America on Deloitte’s 2018 Technology Fast 500™.
FalconX is a fast-growing institutional smart digital asset broker, delivering best-in-class pricing and execution capabilities for institutional traders and investors. FalconX's vision is to grow the digital asset ecosystem 1000X with industry-leading prime brokerage solutions. Our all-star team comes from leading institutions like Google, LinkedIn, JUMP, Citadel, Goldman Sachs, Harvard Business School, Carnegie Mellon, and the IITs. At FalconX, you’ll help make crypto-assets a successful asset class. Some of the largest financial institutions from Wall Street to Silicon Valley and around the world will turn to you for the best market prices and insights on the crypto industry. You will become an expert on crypto industry trends and market movements. With your experience, you will shape our company’s trading strategies according to our core values: speed, transparency and trust.
Senior DevOps Engineer
Symphony Talent is a leader in the employer branding, career site hosting, and talent acquisition tools space. We help employers connect with talent, engage candidates through technology, and manage the recruiting and onboarding of candidates throughout the entire talent acquisition lifecycle. Symphony Talent hosts career sites and SaaS-based recruiting tools for a large number of Fortune 500 companies and well-known global name brands. The services we deliver are diverse, complex, and interesting. Delivery of these services requires a cloud-first mentality with skills in internet-facing web application management, security, performance management, infrastructure automation, and incorporation of leading-edge technologies. Geeks are welcome here. Our applications handle tens of millions of pageviews per month, incorporate automation and auto-scaling, and span multiple redundant cloud regions and availability zones designed for high availability. Our services are built on a significant AWS footprint, utilizing nearly every AWS service in some form or fashion. We are cloud leaders in the midwest market. We are looking for a Senior DevOps Engineer to join our team to help manage our suite of cloud-based recruiting tools and career website hosting platforms. You could help us architect and manage a large and growing cloud-based infrastructure footprint in AWS with emphasis on infrastructure automation, auto-scaling, and proactive self-healing. This position is part of a highly collaborative team including other DevOps engineers, developers, weird security hacker types, and data analytics nerds.
Opportunities will exist to engage various development teams, learn new technologies, and mentor junior team members.
Why Symphony Talent is better than your current job:
● Work in a cloud-first organization that sells cloud-based solutions and leverages them internally
● Work with many modern automation technologies
● Influence IT strategy, tools, and budget
● Participate in open-source projects (if desired)
● Bright, fun, and casual office environment
● Work in an organization that puts people first and cares about your personal development, interests, and unique skillset
What it takes to get this job:
● Hands-on experience with AWS - provisioning VPCs, EC2, S3, CloudFormation, CloudFront, RDS, Redshift, CloudWatch, SQS, etc. (emphasis on automated instantiation of these services where possible - not just manual creation)
● Experience with Ansible or a similar configuration management tool (Chef, Puppet, SaltStack)
● Experience with infrastructure automation tooling (AWS CloudFormation, Terraform, etc.)
● CI/CD and deployment automation experience
● Demonstrated experience automating cloud infrastructure and building redundant, self-healing systems (auto-scaling, immutable infrastructure, on-demand workloads, etc.)
● Experience working in an integrated environment with development teams, and comfort interacting with and providing guidance to dev staff regarding infrastructure best practices, application development alignment with immutable infrastructure and cloud service principles, and issue remediation
● Thought leadership to understand, analyze, suggest and drive innovative solutions to solve problems on the ground
● Linux
● Windows
● Motivation to learn new things quickly
● Willingness to admit if you don’t know something and ask for help (we don’t know everything either)
It would be cool if you also had:
● A positive personality that will make others enjoy working with you
● Experience using system monitoring tools
● Experience automating security into deployment streams and infrastructure
● Some development skills (understanding the developers helps)
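The "self-healing systems (auto-scaling …)" item above usually reduces to a policy that derives a desired instance count from a load metric, clamped to fleet bounds, similar in spirit to target-tracking scaling. A minimal sketch; the target CPU and bounds are illustrative assumptions:

```python
# Sketch: target-tracking-style scaling decision. Given current fleet size
# and average CPU, pick the instance count that would bring average CPU
# back to the target, clamped to [minimum, maximum]. Values are illustrative.
import math

def desired_capacity(current: int, avg_cpu: float,
                     target_cpu: float = 50.0,
                     minimum: int = 2, maximum: int = 20) -> int:
    if avg_cpu <= 0:
        return minimum                      # idle fleet: shrink to the floor
    wanted = math.ceil(current * avg_cpu / target_cpu)
    return max(minimum, min(maximum, wanted))

if __name__ == "__main__":
    print(desired_capacity(4, 100.0))   # overloaded: scale out
    print(desired_capacity(4, 10.0))    # underutilized: scale in
```

Rounding up and clamping keeps the policy conservative: it never scales in past the floor that preserves redundancy, and never scales out past the cost ceiling.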
KaiOS is a mobile operating system for smart feature phones that stormed the scene to become the 3rd largest mobile OS globally (2nd in India, ahead of iOS). We are on 100M+ devices in 100+ countries. We recently closed a Series B round with Cathay Innovation, Google and TCL.
Key skills: Should have knowledge of cloud platforms, installation and maintenance of NoSQL databases, and standard HTTP technologies.
- 4 to 7 years of experience in Linux administration and Shell/Perl/Python/Ruby scripting
- Cloud knowledge (AWS, Docker)
- VMware, vSphere
- Databases (MySQL, NoSQL, Cassandra, HBase)
- Standard HTTP technologies (Apache, Nginx, HAProxy)
Mandatory skills: Cassandra or Nginx or HAProxy
Responsibilities:
- Monitor backend servers
- Installation of released solutions to AWS or the cloud
- Log analysis
- Fixing issues in the cloud environment
Requirements
Designation: DevOps Engineer
Location: Bangalore
Experience: 4 to 7 years
Notice period: 30 days or less
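The "log analysis" duty listed above often starts with tallying HTTP status codes from web server access logs. A tiny sketch, assuming Apache/Nginx combined log format (the sample lines are fabricated):

```python
# Sketch: count HTTP status codes in combined-log-format access lines.
# In that format the request is the quoted field, so the status code is
# the first token after the closing quote.
from collections import Counter

def status_counts(lines) -> Counter:
    counts = Counter()
    for line in lines:
        parts = line.split('"')
        if len(parts) >= 3:                 # skip malformed lines
            status = parts[2].split()[0]
            counts[status] += 1
    return counts

if __name__ == "__main__":
    sample = [
        '1.2.3.4 - - [01/Jan/2024:00:00:00 +0000] "GET / HTTP/1.1" 200 512',
        '1.2.3.4 - - [01/Jan/2024:00:00:01 +0000] "GET /x HTTP/1.1" 404 90',
        '1.2.3.4 - - [01/Jan/2024:00:00:02 +0000] "GET / HTTP/1.1" 200 512',
    ]
    print(status_counts(sample))
```

A spike in 5xx counts from a script like this is the usual first signal that "fixing issues in the cloud environment" is about to be needed.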
What the future holds: You will work up and down our entire technology stack. You will work with engineers to build beautiful and intuitive interfaces, and connect them to complex backend systems. You will be a crucial member of our engineering team and help design, build, and maintain systems necessary to continue our rapid growth. You will be building systems for teachers, students and parents around the world to interact in ways that were never possible before.
Your Responsibilities:
- Problem solver: scrappy, ingenious, and relentless in solving extremely complex problems
- Customer-focused and passionate about creating web applications that interest millions of users, and building delightful user experiences
- Work with developers to design algorithms and flowcharts
- Produce clean, efficient code based on specifications
- Verify and deploy programs and systems
- Troubleshoot, debug and upgrade existing software
Job Requirements:
- 1+ years of engineering experience with a proven track record of building high-performance consumer web applications or services
- Past experience using Node.js, Ruby, Java or Python
- Experience with design or development of REST-based API frameworks; familiarity with why API frameworks are needed
- Hands-on knowledge of RDBMSs like MySQL, Postgres, Oracle
- Strong algorithms and data structures background
- Exposure to front-end UI frameworks like AngularJS, Bootstrap or similar
- Good to have: experience building realtime applications with thousands of users interacting simultaneously
- Good to have: knowledge of any of AWS / Elasticsearch / Redis