Skills Requirements
- Knowledge of Hadoop ecosystem installation, initial configuration and performance tuning.
- Expert with Apache Ambari, Spark, Unix shell scripting, Kubernetes and Docker.
- Knowledge of Python is desirable.
- Experience with HDP Manager/clients and various dashboards.
- Understanding of Hadoop security (Kerberos, Ranger and Knox), encryption and data masking.
- Experience with automation/configuration management using Chef, Ansible or an equivalent.
- Strong experience with any Linux distribution.
- Basic understanding of network technologies, CPU, memory and storage.
- Database administration is a plus.
Qualifications and Education Requirements
- 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions and dashboards running on Big Data technologies such as Hadoop/Spark.
- Bachelor's degree or equivalent in Computer Science, Information Technology or related fields.
What You'll Do
• As a member of our cross-functional squad, you will own the entire infrastructure.
• You will design, document and implement systems.
• You will write and review code within the scope of your squad.
• Take ownership and troubleshoot sophisticated systems under pressure.
• Partake in the on-call rotation with DevOps and backend engineers.
• You will be aware of the systems at all times.
Required Skills
• 3+ years of commanding knowledge of systems & networking [Linux]
• Architect-level knowledge of a cloud computing platform [preferably AWS]
• Excellent architectural/implementation-level knowledge of any container orchestration tool [we use EKS]
• Good administration/tuning knowledge of SQL/NoSQL [RDS/ES/Kafka/Mongo/Scylla]
• A must-have: very strong coding and scripting skills in any of the following languages - Go, Python, Bash, Java
• You are comfortable with large-scale production systems and technologies, for example, load balancing, monitoring/logging, distributed systems, and configuration management [Terraform/Vault/Consul]
• Good understanding of and experience with software engineering best practices like automation and CI/CD
• You believe in solving problems with open-source tools and technology.
• Always ready to learn more and adopt new cutting-edge technology with the right value proposition.
Good to have
1. Ansible
2. Building tools/dashboards to monitor infrastructure
3. EKS
4. Helm charts
5. Jenkins
6. Load balancers
7. Open-source contributions
What can you expect
● Work with a team who believes in STUDENT FIRST; you will have an opportunity to build the first (BE ORIGINAL) Open Social Learning Platform that can impact millions of students across the globe
● A working environment where you can look at things differently, challenge, and offer solutions; one that also offers you the freedom to commit 'n' number of first-time mistakes - NEVER DEPRIVE A LEARNER
● Complete ownership of and responsibility for the success of your autonomous team across all scopes of work - OWN IT and BE BOLD
● An opportunity to lead a bunch of young, smart, driven and dynamic engineers who are committed to going BEYOND SELF to solve business challenges
● Work closely with a leadership team who live by the value of BE BETTER EVERYDAY to help grow an amazing organisation
Mandatory: Docker, AWS, Linux, Kubernetes or ECS
- Prior experience provisioning and spinning up AWS clusters / Kubernetes
- Production experience building scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor and optimise production database servers
- Prior work with cloud monitoring tools (Nagios, Cacti, CloudWatch etc.)
- Experience with Docker, Kubernetes, Mesos, NoSQL databases (DynamoDB, Cassandra, MongoDB, etc.)
- Other open-source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge of the Linux environment
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects
- Working knowledge of configuration management (Chef, Puppet or Ansible preferred)
- Continuous integration tools (Jenkins preferred)
- Experience handling large production deployments and infrastructure
- DevOps-based infrastructure and application deployment experience
- Working knowledge of the AWS network architecture, including designing VPN solutions between regions and subnets
- Hands-on knowledge of the AWS AMI architecture, including the development of machine templates and blueprints
- Ability to validate that the environment meets all security and compliance controls
- Good working knowledge of AWS services such as Messaging, Application Services, Migration Services and Cost Management Platform
- Proven written and verbal communication skills
- Able to serve as the technical team lead overseeing the build of the cloud environment based on customer requirements
- Previous NOC experience
- Client-facing experience with excellent customer communication and documentation skills
Reflektive's infrastructure team is responsible for building and supporting the services that run Reflektive. The team also assists in scaling the engineering organization by participating in architectural design reviews, establishing best practices and procedures, and debugging problems across the whole stack. Reflektive has major infrastructure initiatives to tackle this year. Our tech stack, workflow, and team continue to adapt along with our growing customer base and internal requirements. Initiatives this year may include prototyping new service architecture on Linux containers, heavily investing in automation, and implementing strong security practices. You'll join a team where everyone, including you, is active in defining what we ship. You'll work with product and engineering teams to understand our needs and learn how your skills can best make a difference. Our products help other companies grow their employees to their full potential. Likewise, we also value personal growth within our company. One aspect of this is exemplified in the learning budget we have in Engineering. Every engineer gets an annual education budget to spend on books, resources, or attending conferences to live our core value of #AlwaysLearning.
Responsibilities
- Heavily influence the future of Reflektive's technical architecture
- Be hands-on in scaling and securing the company's infrastructure
- Collaborate across engineering and product teams to define requirements
- Communicate trade-offs of technical design
Must have
- Experience scaling infrastructure for a growing engineering team
- Strong experience in system and application design
- Experience building and deploying microservices in development and production
- Strong hands-on experience with provisioning/orchestration: Chef, Puppet, Terraform, Kubernetes
- Strong knowledge of and experience with Linux containers (Docker, Vagrant)
- Strong Unix fundamentals
- Strong network fundamentals
- Senior-level experience with Git and shell scripting
- Strong knowledge of and experience with AWS: EC2, S3, RDS, Route53
- Strong experience in release and deploy automation; Continuous Integration / Deployment (CI/CD) experience
- Bachelor's Degree in Computer Science, Engineering, or a related field
- 7+ years of industry experience
Bonus Points
- Languages: Ruby
- Database administration
- Experience scaling a monolith
- Experience with Elasticsearch, Kinesis, Storm
- Experience working in an Agile and Scrum environment
- Web-based/SaaS company background
- Startup experience
About Reflektive
Forward-thinking organizations use Reflektive's people management platform to drive employee performance and development with Real-Time Feedback, Recognition, Check-Ins, Goal Management, Performance Reviews, 1-on-1 Profiles, and Analytics. Reflektive's more than 500 customers include Blue Origin, Comcast, Instacart, Dollar Shave Club, Healthgrades, Wavemaker Global, and Protective Life. Backed by Andreessen Horowitz, Lightspeed Venture Partners, and TPG Growth, Reflektive has raised more than $100 million to date, and was ranked the 13th Fastest Growing Company in North America on Deloitte's 2018 Technology Fast 500™.
Qualifications:
• Bachelor's or Master's degree in Computer Science or Software Engineering from a reputed university.
• 5-8 years of experience building scalable, secure and compliant systems.
• More than 2 years of experience working with GCP deployments serving millions of daily visitors
• 5+ years of hosting experience in a large, heavy-traffic environment
• 5+ years of production application support experience in a high-uptime environment
• Software development and monitoring knowledge with automated builds
• Technology:
o Cloud: AWS or Google Cloud
o Source control: GitLab, Bitbucket or GitHub
o Container concepts: Docker, microservices
o Continuous integration: Jenkins, Bamboo
o Infrastructure automation: Puppet, Chef or Ansible
o Deployment automation: Jenkins, VSTS or Octopus Deploy
o Orchestration: Kubernetes, Mesos, Swarm
o Automation: Node.js or Python
o Linux environment network administration, DNS, firewall and security management
• Ability to adapt to the startup culture, handle multiple competing priorities, meet deadlines and troubleshoot problems.
• Works closely with the development team, technical lead, and solution architects within the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization; containers - Kubernetes; core networking; cloud-native development; Platform as a Service - Cloud Foundry; Infrastructure as a Service; distributed systems, etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation and scalability, and ensuring maximum availability of server infrastructure
• Able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, Cassandra, etc.
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine hardware or software requirements related to such changes.
Job Type: Full Time
Job Location: HSR Layout, Bangalore
Website: http://www.cyware.com
LinkedIn: https://www.linkedin.com/company/cyware/
Media: https://cyware.com/press-and-media
Who We Are
Cyware aims to revolutionize the cybersecurity industry by developing innovative cybersecurity products and solutions. Cyware's mission is to empower organizations to adopt a proactive approach to cybersecurity through strategic and tactical threat intelligence sharing and cyber fusion analysis. Cyware Labs is a product-based cybersecurity provider headquartered in New York, USA. Our pioneering solutions enable organizations to develop proactive cyber defence capabilities, effectively exchange strategic, tactical, and operational threat intelligence, and quickly respond to and manage security threats in real time. Our mission is to revolutionize and simplify the security fabric to deliver truly integrated and intuitive solutions that provide a broad array of intel sharing, analytical and threat response functions across various platforms and mobile devices. Our unique products combine core facets of the next-generation Security Operations Center (SOC), such as situational awareness, information sharing, cyber threat intelligence exchange, data fusion and threat response, to give our clients the needed visibility, control and advanced defensive capabilities, with exceptional performance, while helping them transition to a truly next-gen SOC. Cyware solutions leverage advanced breakthroughs in machine learning, artificial intelligence and blockchain technologies to constantly challenge the security status quo and catalyze a growing ecosystem of empowered enterprises against the evolving threat landscape. Our clients include Fortune 500 financial, healthcare and defence organizations, multinational retail corporations, trade associations, industry groups (including ISACs and ISAOs), non-profits and government agencies.
Why Cyware?
At Cyware, we tackle cyber threats by bringing together the smartest security analysts, technology experts, and data scientists - all under one roof. We work relentlessly towards creating unique solutions and products to provide the most effective cyber defence solutions for our clients. We admire the risk takers who are willing to go beyond traditional barriers and challenge the status quo for the greater good of society. Cyware Labs is looking for sharp, intelligent, innovative and hardworking people with a minimum of 1 year of experience as a DevOps Engineer in the enterprise/software product industry.
Primary Responsibilities:
- Work closely with development teams to integrate their projects into the production AWS environment and ensure their ongoing support once there.
- Gain a deep application-level knowledge of the systems as well as contributing to their overall design.
- Be a DevOps champion - work closely with other internal teams to build security, reliability, and scalability into the development life cycle.
- Dive deep into the software stack to troubleshoot as needed.
- Build engineering automation and productivity tools to streamline and scale applications in the production environment.
- Troubleshoot and resolve issues related to application development, deployment and operations.
- Build reliable infrastructure services in AWS from the ground up to deliver highly scalable services.
- Work with a team of peers who are smart, professional, pull their own weight, and share a passion for what they are creating.
Skills and Qualifications:
- Hands-on experience with cloud services like AWS, Google Cloud, Azure, etc.
- Basic knowledge of Python/Django.
- Experience with tools like Chef, Puppet, Ansible, etc.
- Experience with different queuing systems like RabbitMQ, Kafka, SQS, etc.
- Ability to create infrastructure services for both cloud and on-premise deployments.
- Basic knowledge of networking concepts like subnets, etc.
- Experience with containers and orchestration tools like Kubernetes.
- Experience working in a startup.
- Exposure to Linux, EC2 security, EC2 load balancers, automation tools, the AWS CLI, S3, CloudWatch & CloudTrail, SSH, Docker, Git, mLab, Jenkins, CircleCI, Nagios, JMeter & BlazeMeter is a must.
- Knowledge of MongoDB, Python/Django, data structures and algorithms would be an added advantage.
Benefits of working at Cyware
We are a small but highly dedicated and motivated team that has created a powerful impact in the domain of cybersecurity in a short period of time. Join us and you will have:
- A chance to learn in a highly competitive environment.
- An opportunity to prove your mettle and enjoy a unique growth trajectory.
- Freedom to innovate and flexibility to operate.
- A team that will support and help you explore your true potential.
- An awesome atmosphere to work in.
Let's get in touch soon!
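The queuing systems named in the skills list above (RabbitMQ, Kafka, SQS) all share one core pattern: producers enqueue messages and consumers process them asynchronously, decoupling the two sides. The stdlib-only sketch below illustrates that pattern in miniature; the function names and the uppercase "processing" step are purely illustrative, not any real broker's API.

```python
# Minimal producer/consumer sketch of the queuing pattern behind
# RabbitMQ, Kafka, and SQS, using only Python's stdlib queue + threading.
import queue
import threading

def produce(q, messages):
    """Enqueue each message, then a sentinel so the consumer can stop."""
    for m in messages:
        q.put(m)
    q.put(None)  # sentinel: no more work

def consume(q, results):
    """Drain the queue until the sentinel arrives."""
    while True:
        m = q.get()
        if m is None:
            break
        results.append(m.upper())  # stand-in for real message processing

q = queue.Queue()
results = []
consumer = threading.Thread(target=consume, args=(q, results))
consumer.start()
produce(q, ["alert", "event", "log"])
consumer.join()
```

Real brokers add durability, acknowledgements, and fan-out on top, but the enqueue/dequeue decoupling shown here is the concept the requirement is testing for.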
Summary:
- Build and operate infrastructure to support front-end, back-end and data tools within the organisation.
- Help teams become more autonomous, allowing the engineering teams to focus on improving the infrastructure and optimising processes.
- Proactively seek out opportunities to improve monitoring and alerting of our hosts and services, and implement them in a timely fashion.
- Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
- Deliver system management tooling to the engineering teams.
- Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices, preferably using Jenkins.
- Contribute to open-source projects which are being used at the company.
- Be an advocate for engineering best practices in and out of the company.
- Share pager duty for critical systems.
Job Description:
- 3-10 years of industry experience
- Experience working with a distributed infrastructure stack
- Minimum 2 years of experience working with AWS
- DNS, TCP/IP, routing, HA & load balancing
- Ruby on Rails or Python and Bash skills
- Experience managing complex systems at scale
- Experience with Docker, rkt or a similar container engine; knowledge of Kubernetes or similar clustering solutions
- Experience with tools such as Ansible or Chef
- Understanding of the importance of smart metrics and alerting
- Hands-on experience with cloud infrastructure provisioning, deployment and monitoring (we are on AWS and use ECS, ELB, EC2, ElastiCache, Elasticsearch, S3, CloudWatch)
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Extra points for experience with:
- New Relic
- Data warehousing
- GCP
- Kafka, Elasticsearch
Job Responsibilities:
- Managing the cloud deployment, with thousands of VMs and containers, at 100% uptime
- Budgeting the infrastructure costs and planning for continued cost optimisation
- Managing and motivating the team members
- Designing the architecture to scale the back-end to meet the business requirements
Requirements:
- Strong background in Linux fundamentals and system administration
- Good command of coding with scripting languages like Python and shell scripting
- Experience with Docker and any one of the container management systems like Kubernetes, Docker Swarm or Apache Mesos
- Ability to use a wide variety of open-source technologies and cloud services like AWS, GCP or Azure
- Experience with automation/configuration management using Puppet, Chef or an equivalent
- Experience with Elasticsearch, MongoDB, Redis, Memcached, Kafka, RabbitMQ or ActiveMQ
- Knowledge of best practices and IT operations in an always-up, always-available service
- Good team management and communication skills
- Good experience with monitoring and alerting systems like Nagios and Zabbix
- Experience with CI and CD tools
Educational Qualification: BE/B.Tech/MCA
DevOps Engineer
Position: Full time
Base Location: Bangalore, with extensive and frequent travel to SEA
R&R:
● Work with people across various levels, right from the delivery team level to top management
● Support internal product and external customers on multiple platforms
● Work with customer teams (specifically development teams) to analyse their processes and environments to improve user satisfaction
● Improve client DevOps teams by enabling them with DevOps concepts and processes
● Act as the technical expert across multiple client projects, helping them enhance their delivery pipeline and overall DevOps and Agile practices
● Identify state-of-the-art CI/CD tools, prepare decision proposals and implementation plans for these tools, and carry out introduction & training, allowing client delivery to move faster
● Bring in new and cutting-edge methodologies in DevOps, both for Greyamp and for clients
Need to have:
● 3-5 years of relevant experience (at least 1 year of experience in development, and 2+ in DevOps)
● Understanding of Linux/Unix administration
● Understanding of one scripting language (Python, shell, Ruby or Perl)
● Experience working with different OS servers (RedHat, Oracle, Microsoft)
● Strong understanding of version control (Git)
● Good understanding of build tools like Ant, Maven or Gradle
● Experience setting up relevant dashboards using tools for code quality and vulnerability checks (SonarQube)
● Experience implementing and maintaining CI/CD pipelines (Jenkins, CircleCI or GoCD)
● Good understanding of and working experience with Docker containers
● Experience working with configuration management tools (Chef, Puppet or Ansible)
● Extensive knowledge of working with a cloud platform and maintaining automation scripts using Terraform or similar tools (preferably AWS, Azure or GCP)
● Basic knowledge of working with a cloud product suite (preferably AWS, Azure or GCP)
● Understanding of and experience working on SaaS architecture
● Understanding of and experience with VMware or other virtual environments
Nice to have:
● Experience working with test automation tools (preferably Selenium)
● Knowledge of and experience with Kubernetes
● Understanding of and experience with monitoring tools like the ELK stack, Prometheus and Grafana, Nagios, or Dynatrace
● Experience working with database deployments
● Client handling experience and stakeholder management
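The CI/CD pipelines mentioned in the requirements (Jenkins, CircleCI, GoCD) all implement a richer version of the same fail-fast stage ordering: run build, test, and deploy in sequence, and stop at the first failing stage so a broken build never ships. A minimal sketch of that control flow, with illustrative stage names and a hypothetical `run_pipeline` helper:

```python
# Fail-fast stage sequencing, the control-flow core of a CI/CD pipeline.
# Each stage is a (name, callable) pair; the callable returns True on success.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure, like a CI job."""
    executed = []
    for name, step in stages:
        executed.append(name)
        if not step():
            return executed, False  # fail fast: later stages never run
    return executed, True

stages = [
    ("build",  lambda: True),
    ("test",   lambda: False),   # a failing test gate
    ("deploy", lambda: True),    # never reached
]
executed, ok = run_pipeline(stages)
```

In a real Jenkins or GoCD pipeline the stages would invoke compilers, test runners, and deployment scripts, but the "later stages never run after a failure" guarantee is the same.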
Below are the skill sets required:
Must-have skills
1. Terraform
2. NSX-T/VDI
3. Atlassian - Bitbucket, Confluence
4. JIRA
5. Scripting experience
Job Description:
- Engineers with Terraform experience to supplement the run team (NDE)
- Manage the MyIT ingest for security policy updates
- Support NSX-based security policy deployed via Terraform, tied to Virtual Desktop Infrastructure (VDI), Pivotal Cloud Foundry (PCF) containers, and Infrastructure as a Service (IaaS) based virtual guests
- Augment existing operational documentation for Terraform, Bitbucket and NSX-T policy deployment
- Facilitate additional training and support for NDE movement into a support role for Infrastructure as Code (IaC)
- Support the plan for migrating existing IaaS-deployed apps onto General Population (GenPop) VLANs with NSX-T security policy
- Translate policy definitions for audit reporting
- Terraform usage and custom scripting capabilities are a must
- Comfortable with the Atlassian suite, including Confluence and JIRA
- Experience with SCM and DevOps tool suites; examples include Git, Bitbucket, Bamboo, Jenkins, Concourse etc.
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India
About Pramata
Pramata's unique, industry-proven offering digitizes critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications via the Pramata cloud-based customer digitization platform. Pramata's customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India.
How Pramata Works
Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing and other systems, and delivers it in the context of a particular user's role and responsibilities. This is done through Pramata's unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth while ensuring that this data remains consistent, accessible and highly secure.
The opportunity - What you get to do
You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment & application management, and the day-to-day support of development teams.
You will manage the development of capabilities to achieve higher automation, quality and performance in automated build and deployment management, release management, on-demand environment configuration & automation, configuration and change management, and production environment support:
- Application monitoring, performance management and production support of mission-critical applications, including application and system uptime and remote diagnostics
- Security - ensure that the highly sensitive data from our customers is secure at all times
- Instrument applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
- High availability and disaster recovery - build and maintain systems that are designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place
- Automate provisioning and integration tasks as required to deploy new code
- Monitoring - take proactive steps to monitor complex interdependent systems to ensure that issues are being identified and addressed in real time
Skills required:
- Excellent communicator with great interpersonal skills, driving clarity about intricate systems
- Hands-on experience with application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis etc.
- Good understanding of software application builds, configuration management and deployments
- Strong scripting skills in languages like shell, Ruby, Python, Perl etc., with a passion for automation
- Comfortable with collaboration, open communication and reaching across functional borders
- Advanced problem-solving and task break-down ability
Additional Skills (Good to have but not mandatory):
- In-depth understanding of and experience working with any cloud platform (e.g. AWS, Azure, Google Cloud etc.)
- Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
- Able to work under pressure and solve problems using an analytical approach; decisive, fast-moving, with a positive attitude
Minimum Qualifications:
- Bachelor's Degree in Computer Science or a related field
- Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software
- Strong programming skills in Python, shell or Java
- Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
- Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS
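The configuration management tools this posting asks for (Ansible, Chef, Salt, Puppet) share one defining contract: tasks describe desired state and are only applied when the current state differs, so re-running them is safe. A stdlib-only sketch of that idempotency idea, with an illustrative `ensure_line` helper that is not any real tool's API:

```python
# Idempotent "desired state" task, the core idea behind Ansible/Chef/Salt:
# applying the same task twice changes the system at most once.
import os
import tempfile

def ensure_line(path, line):
    """Append `line` to the file only if it is not already present.
    Returns True if a change was made ('changed' in Ansible terms)."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # already in desired state: no-op
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

path = os.path.join(tempfile.mkdtemp(), "sshd_config")
first = ensure_line(path, "PermitRootLogin no")   # applies the change
second = ensure_line(path, "PermitRootLogin no")  # re-run: no change
```

Real tools layer inventories, templating, and rollback on top, but "check state, then converge only if needed" is what makes their playbooks and recipes safely repeatable.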
About the Company:
Tracxn is a Bangalore-based product company providing a research and deal-sourcing platform for Venture Capital, Private Equity, CorpDevs & professionals working around the startup ecosystem. We are a team of 750+ working professionals serving customers across the globe. Our clients include funds like Andreessen Horowitz, Sequoia Capital, Accel Partners and NEA, and large corporates such as ING, Societe Generale, LG and Royal Bank of Canada. We are backed by prominent investors like Ratan Tata, Nandan Nilekani, and SAIF Partners.
Founders:
- Neha Singh (ex-Sequoia, BCG | MBA - Stanford GSB)
- Abhishek Goyal (ex-Accel Partners, Amazon | BTech - IIT Kanpur)
Roles and Responsibilities:
- Design, develop and deliver products and frameworks (like queuing systems, schedulers, etc.) that will be used across all the engineering teams in Tracxn.
- Evaluate, benchmark and roll out platform components like API gateways, load balancers etc.
- Drive centralized solutions like service discovery, rate limiting etc. for teams across Tracxn.
- Extend or develop frameworks on top of Docker to solve Tracxn's needs for scaling.
- Work with application development teams to refactor the apps or build new modules to help onboard new architectures.
Skills and Experience:
- Must have experience in building fault-tolerant and scalable infrastructure.
- Must have good conceptual, architectural & design skills.
- Must have experience in at least one programming language, such as Java, C#, Python, shell script or Bash script.
- Must have experience with relational (RDBMS) or NoSQL databases.
- Must have a deep understanding of how software works at the systems level, with familiarity with low-level aspects of performance, multi-threading, performance analysis, and optimization.
- Good to have experience working with cloud platforms like AWS, Google Cloud etc.
- Good to have experience in Docker, Kubernetes, Ansible, Chef, Puppet.
- Good to have experience in frameworks such as Kafka, HAProxy, Nginx.
- Team management experience will be an added bonus.
What we have to offer:
- Work with a performance-oriented team driven by ownership.
- Learn to design systems for high accuracy, efficiency, and scalability.
- Focus on delivering quality work rather than deadlines.
- Meritocracy-driven, candid culture.
- Very high visibility into the startup ecosystem.
Above all, you love to build and ship products that customers will use every day.
About HealthifyMe:
We were founded in 2012 by Tushar Vashisht and Sachin Shenoy, and incubated by Microsoft Accelerator. Today, we happen to be India's largest and most loved health & fitness app, with over 4 million users from 220+ cities in India. What makes us unique is our ability to bring together the power of artificial-intelligence-powered technology and human empathy to deliver measurable impact in our customers' lives. We do this through our team of elite nutritionists & trainers working together with the world's first AI-powered virtual nutritionist - "Ria", our proudest creation to date. Ria references data from over 200 million food & workout logs and 14 million conversations to deliver intelligent health & fitness suggestions to our customers. Ria also happens to be multilingual; "she" understands English, French, German, Italian & Hindi. Recently, Russia's Sistema and Samsung's AI-focussed fund NEXT led a USD 12 million Series B funding into our business. We are the most liked app in India across categories, we've been consistently rated the no. 1 health & fitness app on the Play Store for 3 years running, and we received Google's "Editor's Choice" award in 2017. Some of the marquee corporates in the country, such as Cognizant, Accenture, Deloitte and MetLife amongst others, have also benefited from our employee engagement and wellness programs. Our global aspirations have taken us to the MENA, SEA and LATAM regions, with more markets to follow.
Desired Skills & Experience:
Requirements:
- Background in Linux/Unix administration
- Experience with automation/configuration management using Puppet, Chef or an equivalent
- Ability to use a wide variety of open-source tools
- Experience with AWS is a must.
- Hands-on experience with at least 3 of RDS, EC2, ELB, EBS, S3, SQS, CodeDeploy, CloudWatch
- Strong experience managing SQL and MySQL databases
- Hands-on experience managing web servers: Apache, Nginx, Lighttpd, Tomcat (Apache experience is valued)
- A working understanding of code and scripts (PHP, Python, Perl and/or Ruby)
- Knowledge of best practices and IT operations in an always-up, always-available service
- Good to have experience in Python and NodeJS
Stack:
- Python/Django, MySQL, NodeJS + MongoDB
- ElastiCache, DynamoDB, SQS, S3
- Deployed on AWS
We'd love to see:
- Experience in Python and data modeling
- Git and distributed revision control experience
- High-profile work on commercial or open-source projects
- Ownership of your work, and an understanding of the need for code quality, elegance, and robust infrastructure
Look forward to:
- Working with a world-class team.
- Fun & work at the same place, with an amazing work culture and flexible timings.
- Getting ready to transform yourself into a health junkie.
Join HealthifyMe and make history!
Site Reliability Engineer
Job Description:
You will be administering the infrastructure of an indigenous, one-of-a-kind artificial intelligence cloud platform. You will be working with the dev teams to deploy, monitor and scale the distributed platform to handle real-time AI analysis and loads and loads of visual data (images and videos in various formats). We're looking for people with extensive DevOps experience and strong systems programming skills.
Responsibilities:
1. Be responsible for the uptime and reliability of SigTuple's infrastructure, and help backend teams achieve it by writing reliable software and automation.
2. Work with other development teams to automate the deployment of modules and manage the build and release pipeline.
3. Extensive process-level and node-level monitoring, and auto-healing of the entire cluster.
4. Managing, provisioning and servicing cloud servers.
5. Contributing to back-end services and to their infrastructure system design.
Requirements:
1. BTech/MTech in any engineering discipline.
2. 3-6 years of experience in a DevOps/software engineering role.
3. Experience in the management of cloud computing services. Extensive knowledge of any one cloud platform (Kubernetes, AWS, GCP, Azure etc.).
4. Proficiency with any major monitoring framework (Sensu, Nagios etc.).
5. Comfortable with any one scripting language (Python, Perl) and a configuration management or orchestration tool (Ansible, Chef etc.).
6. Proficiency with OS and network fundamentals, and strong Linux administration skills.
7. Experience with container tools (the Docker ecosystem) will be a plus.
8. Experience working with issues of scale in a system.
9. Experience working in a startup is a plus.
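The "process-level monitoring and auto-healing" responsibility above reduces to a small control loop: poll a health check, restart the service when it fails, and give up after a bounded number of attempts so a permanently broken node gets escalated instead of restarted forever. A minimal sketch, where the health-check and restart callables are simulated stand-ins for real probes (e.g. an HTTP ping or a systemd restart):

```python
# Monitor/auto-heal loop: the pattern Sensu/Nagios handlers or Kubernetes
# liveness probes implement at scale. All names here are illustrative.

def monitor(check, restart, max_restarts=3):
    """Poll `check`; call `restart` on failure, up to `max_restarts` times.
    Returns ('healthy', restarts_used) or ('gave_up', restarts_used)."""
    restarts = 0
    while not check():
        if restarts >= max_restarts:
            return "gave_up", restarts  # escalate instead of looping forever
        restart()
        restarts += 1
    return "healthy", restarts

# Simulated service: unhealthy until it has been restarted twice.
state = {"restarts": 0}
check = lambda: state["restarts"] >= 2
restart = lambda: state.__setitem__("restarts", state["restarts"] + 1)
status, used = monitor(check, restart)
```

Bounding the restart count is the important design choice: unbounded auto-healing can mask a real outage, which is why production systems pair it with alerting.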