Role and Responsibilities:
- Solve complex cloud infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD: find bottlenecks in the software delivery pipeline and fix them with developers to help them deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility.
- Evaluate cutting-edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks, enabling them to move fast. (In simple words, you'll be a bridge between tech, operations & product.)

Skills required:

Must have:
- Deep understanding of open-source DevOps tools.
- Scripting experience in one or more of Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, CloudFormation, SQS, etc.)
- Knowledge of distributed system deployment.
- Deployed and orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Set up monitoring and alerting for services using the ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with queue-based systems.
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies.
- Implemented central logging.

Good to have:
- Experience dealing with personal information (PI) security.
- Experience conducting internal audits and assisting external audits.
- Experience implementing solutions on-premise.
- Experience with blockchain.
- Experience with private cloud setup.

Required Experience:
- B.Tech. / B.E. degree in Computer Science or an equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or a similar version control system.
- Deep understanding of at least one open-source distributed system (Kafka, Redis, etc.)
- Ownership attitude is a must.
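The posting above lists "implemented caching infrastructure and policies" as a must-have. As a minimal, purely illustrative sketch (the class name and its injectable clock are hypothetical, not anything the posting prescribes), a time-to-live eviction policy can be modeled in a few lines of Python; production systems would typically reach for Redis or Memcached instead:

```python
import time

class TTLCache:
    """Minimal in-memory cache with a time-to-live (TTL) eviction policy.

    Hypothetical sketch only: real caching infrastructure would use a
    shared store (Redis, Memcached) and richer policies (LRU, max size).
    """

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock   # injectable time source, handy for testing
        self._store = {}     # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value
```

Injecting the clock keeps the policy deterministic under test, which is the same discipline the posting's "automated recurring tasks" bullet implies.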
WHAT WILL I DO?

You will work as a Site Reliability Engineer responsible for the availability, performance, monitoring, and incident response, among other things, of the platforms and services used and owned by Shuttl. The SRE team works alongside the engineering team and owns every aspect of service availability, as well as disaster recovery and business continuity plans. You will work with other Site Reliability Engineers and report to the Lead of the Site Reliability Engineering team.

HOW DO WE WORK?

Our engineering process is a five-step process consisting of phases for planning, developing, testing & profiling, releasing, and monitoring. The planning phase consists of documenting the feature/task to be done, followed by various discussions. These discussions cover product, delivery estimates, release plan, monitoring plan, test plans, architecture, code design, technology choices, and best-practice adoption. The development and testing phases coexist and involve writing code, unit tests, performance tests, profiling, stress testing, code reviews, and QA testing. This phase is punctuated with daily scrums and standups. The release phase is largely about managing and communicating the release to customers and internal stakeholders and activating features. The last phase is the monitoring phase, where relevant metrics and exceptions are tracked and any critical refinement of the delivered feature is undertaken. This phase culminates with a retrospective.

SREs get involved in this process as early as possible to provide general guidance and recommendations and to help design the application to comply with community standards such as CNCF and 12-Factor. SRE involvement and influence tend to increase during the mid to final stages of development, when the application is primed for beta evaluation and all the tooling and instrumentation is finalized.

WHAT SKILLS SHOULD I HAVE?
For this role we expect you to have 3+ years of experience working as a DevOps Engineer or SRE. You should have a good grasp of Unix-like systems, access control, networking nuances, process isolation by means of kernel-provided features, distributed applications and algorithms, job schedulers, and secret management, among other things. At Shuttl we are big proponents of immutable infrastructure. All our infrastructure is hosted on Amazon Web Services, and we use HashiCorp's Terraform to manage the infrastructure as code. A good handle on AWS and Terraform is therefore a definite plus. Since SREs are expected to write a lot of code, you are also expected to be proficient in a programming language, preferably Python or Go.
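Since the posting expects SREs to "write a lot of code" in Python or Go, a representative sample of that kind of code is a retry helper with exponential backoff, a pattern commonly used around flaky network calls. This is a hedged sketch, not anything Shuttl specifies; the function name and parameters are invented for illustration:

```python
import time

def retry(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponential backoff on failure.

    Illustrative only: production SRE tooling would typically add
    jitter, structured logging, and retry only a whitelist of
    transient exceptions rather than bare Exception.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Making `sleep` injectable keeps the backoff schedule testable without real waiting, the same dependency-injection habit that makes automation code reviewable.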
Base Location: Bangalore, with extensive and frequent travel to SEA

R&R:
● Work with people across various levels, right from the delivery-team level to top management
● Support internal product and external customers on multiple platforms
● Work with customer teams to analyse their environments and improve user satisfaction
● Understand pain points and recommend solutions to automate the end-to-end cycle
● Act as the first point of contact for guidance and recommendations to increase efficiency and reduce customer incidents
● Improve client DevOps teams by enabling them with DevOps concepts, processes, and strategies
● Act as the technical expert across multiple client projects, helping them enhance their delivery pipelines and their overall DevOps and Agile practices
● Identify state-of-the-art CI/CD tools, prepare decision proposals and implementation plans for these tools, and carry out introduction & training, allowing client delivery to move faster
● Bring new and cutting-edge DevOps methodologies both to Greyamp and to clients
Need to have:
● 6-8 years of relevant experience (at least 2 years in development, and 4+ in DevOps)
● Strong understanding of Linux/Unix administration
● Strong understanding of one scripting language (Python, shell, Ruby, or Perl)
● Experience working with different OS servers (RedHat, Oracle, Microsoft)
● Strong understanding of version control (Git)
● Good understanding of build tools like Ant, Maven, or Gradle
● Experience working with test automation tools (preferably Selenium)
● Experience setting up dashboards using code-quality and vulnerability-check tools (SonarQube)
● Experience implementing an end-to-end CI/CD setup for an organization
● Experience implementing and maintaining CI/CD pipelines (Jenkins, CircleCI, or GoCD)
● Good understanding of and working experience with Docker containers and Kubernetes
● Experience working with configuration management tools (Chef, Puppet, or Ansible)
● Extensive knowledge of working with cloud platforms by writing automation scripts using Terraform or similar tools (preferably AWS, Azure, or GCP)
● Hands-on experience with the cloud product suite (preferably AWS, Azure, or GCP)
● Understanding of and experience with SaaS architecture
● Understanding of and experience with monitoring tools like the ELK stack, Prometheus and Grafana, Nagios, or Dynatrace
● Understanding of and experience with VMware or other virtual environments
● Experience working with database deployments
● Client-handling experience, stakeholder management, and team handling

Nice to have:
● International exposure / having worked with cross-cultural and distributed teams
● Setting up PaaS within an organisation
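The "end-to-end CI/CD setup" bullet above boils down to ordered stages that fail fast: if the test stage fails, deploy never runs. As a toy model only (stage names and the function are invented for illustration; real pipelines in Jenkins or GoCD add parallelism, artifacts, and retries), the control flow can be sketched as:

```python
def run_pipeline(stages):
    """Run named pipeline stages in order, stopping at the first failure.

    `stages` is a list of (name, callable) pairs; each callable returns
    a truthy value on success. Returns (succeeded, results), where
    `results` maps each stage that ran to its outcome. Hypothetical
    sketch of a CI/CD build -> test -> deploy flow, not a real tool.
    """
    results = {}
    for name, stage in stages:
        ok = bool(stage())
        results[name] = ok
        if not ok:
            return False, results  # fail fast: later stages never run
    return True, results
```

The fail-fast ordering is what keeps a broken build from ever reaching the deploy stage, which is the core guarantee a CI/CD setup provides.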
- Design and implement CI/CD for SaaS.
- Experience in Git administration.
- Experience in Linux administration and shell scripting; Ansible, Chef, and Puppet; server hardening; Terraform scripts.
- Experience in building and administering VMs and containers using Docker and Kubernetes.
- Experience in AWS, Google Cloud, or Azure.
- Experience in Python, Perl, or Ruby is a plus.
- AWS certification is a plus.
Sr. DevOps Engineer

About Us:
Led by former Salesforce.com and Siebel executive Chuck Ganapathi, Tact.ai is on a mission to make enterprise software more human-friendly. Tact.ai's conversational AI sales platform is used by sales teams at GE, Cisco Systems, Kelly Services, and other Fortune 500 companies to drive revenue growth by eliminating friction in their daily sales workflow. Headquartered in Redwood City, CA, Tact.ai Technologies, Inc. is a privately held company backed by Accel Partners, Redpoint Ventures, Upfront Ventures, M12 (formerly Microsoft Ventures), Comcast Ventures, Salesforce Ventures, and the Amazon Alexa Fund.

Responsibilities:
We're looking for a passionate, security-minded DevOps/IT Engineer with AWS and Azure experience to make an immediate impact on our operations team. Tact.ai is a well-funded, early-stage startup with a world-class product and team and a growing customer base. Some of your main responsibilities include:
- Maintain and improve our cloud infrastructure, orchestration, and deployment process
- Improve the developer experience and onboarding

Requirements:
- Passion for clean and elegant code
- Experience shipping code (Ruby on Rails and Java)
- Experience with configuration management tools (Ansible, Chef, or Puppet)
- Strong problem-solving skills and willingness to work with various teams to find solutions

Bonus Points:
- Experience with start-ups and small teams
- Familiarity with Azure Active Directory, Single Sign-On, Salesforce
About Company:
Tracxn is a Bangalore-based product company providing a research and deal-sourcing platform for venture capital, private equity, corp devs & professionals working around the startup ecosystem. We are a team of 750+ working professionals serving customers across the globe. Our clients include funds like Andreessen Horowitz, Sequoia Capital, Accel Partners, and NEA, and large corporates such as ING, Societe Generale, LG, and Royal Bank of Canada. We are backed by prominent investors like Ratan Tata, Nandan Nilekani, and SAIF Partners.

Founders:
- Neha Singh (ex-Sequoia, BCG | MBA - Stanford GSB)
- Abhishek Goyal (ex-Accel Partners, Amazon | BTech - IIT Kanpur)

Roles and Responsibilities:
- Design, develop, and deliver products and frameworks (like queuing systems, schedulers, etc.) that will be used across all the engineering teams in Tracxn.
- Evaluate, benchmark, and roll out platform components like API gateways, load balancers, etc.
- Drive centralized solutions like service discovery and rate limiting for teams across Tracxn.
- Extend or develop frameworks on top of Docker to solve Tracxn's scaling needs.
- Work with application development teams to refactor apps or build new modules to help onboard new architectures.

Skills and Experience:
- Must have experience in building fault-tolerant and scalable infrastructure.
- Must have good conceptual, architectural & design skills.
- Must have experience in at least one programming language such as Java, C#, Python, shell script, or Bash.
- Must have experience with at least one database, whether RDBMS or NoSQL.
- Must have a deep understanding of how software works at the systems level, with familiarity with low-level aspects of performance, multi-threading, performance analysis, and optimization.
- Good to have experience working with cloud platforms like AWS, Google Cloud, etc.
- Good to have experience in Docker, Kubernetes, Ansible, Chef, Puppet.
- Good to have experience in frameworks such as Kafka, HAProxy, Nginx.
- Team management experience will be an added bonus.

What we have to offer:
- Work with a performance-oriented team driven by ownership.
- Learn to design systems for high accuracy, efficiency, and scalability.
- Focus on delivering quality work rather than deadlines.
- Meritocracy-driven, candid culture.
- Very high visibility into the startup ecosystem.

Above all, you love to build and ship products that customers will use every day.
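The rate-limiting responsibility mentioned in this posting is often implemented with a token bucket: clients may burst up to a capacity, and tokens refill at a steady rate. A minimal single-process sketch follows (the class and its injectable clock are hypothetical; a centralized deployment like the one Tracxn describes would back the counters with a shared store such as Redis):

```python
class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity`,
    refilling at `rate` tokens per second.

    Illustrative sketch only; not thread-safe and not distributed.
    """

    def __init__(self, rate, capacity, clock):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.clock = clock          # injectable time source
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one token for this request
            return True
        return False            # over the limit: reject or queue
```

Refilling lazily on each call (rather than with a background timer) keeps the limiter cheap and is the usual design choice for this pattern.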