Balbix is looking for a senior engineer/architect for its connector platform development. You will play a key role in choosing the technology stack, architecture, and feature set of Balbix's connector initiative. Balbix consumes information from many different sources within the enterprise, such as Active Directory servers, switches, and routers. Balbix is looking to create a catalog of connectors, including connectors to cloud environments such as Azure, GCP, and AWS. As the initial member of the connector team, you will collaborate with Balbix's product management and technical leads to ensure scalable, rapid, high-quality connectors. You will also have the chance to build a team around you to deliver the connector catalog and solve real-world problems plaguing cybersecurity.

Job Responsibilities:
1. Architect, design, and build a scalable factory and associated infrastructure to allow the rapid development of connectors to a wide range of systems and components.
2. Architect, design, and test APIs between the connectors and the Balbix Brain.
3. Streamline the DevOps lifecycle to ensure continuous deployment and delivery into production.
4. Hire and lead an agile team of engineers and testers.

Job Requirements:
1. Extensive experience in cross-platform development using a variety of programming languages such as Python, C++, and Java.
2. Extensive experience with software architecture and API design.
3. Extensive experience in building high-quality products using automation.
4. Experience setting up CI/CD pipelines with rapid deployment into production.
5. Ability to hire a world-class agile development and testing team.
6. Excellent communication skills.
7. Bachelor's degree in CS or a related discipline.
8. Motivation and passion for building world-class enterprise products.
9. Experience working in a Scrum workflow.

This role represents a unique opportunity to join a hyper-growth company in a key role where you can make a big impact on the trajectory of the company.
What are we looking for? You are an engineer with an eye for constant improvement. You look at improving not only the code but also the tooling, the commands you use, the user-facing documentation, and everything else that makes great and beautiful products possible. You don't throw around words such as "high availability" or "resilient systems" without understanding at least their basics, because you know the words are easy to say but building such a system in practice takes a fair amount of work. You love coaching people, whether about 12-factor apps or the latest tool that cut the time a task takes by some factor. You lead by example when it comes to technical work and community. You will be a hands-on contributor, but you also love to grow the people on your team by guiding and mentoring them. You can talk fluently to computers: it does not matter whether it is Python, Go, Java, NodeJS, or any other widely used programming language; as long as you know one language and know it well, you fit right in. You believe languages are just tools to solve problems. You have some exposure to and understanding of systems: you have worked on some and are curious about the ones you have not. You can express your ideas and opinions, and you don't shy away from saying so when you don't know something. We are not hiring Wikipedia, after all, are we?

What you will be learning and doing? You will be working on cloud native technologies such as Kubernetes, Prometheus, and service meshes like Linkerd and Istio. You will most likely be programming in Go or Python (not to worry if you have not used them before; some of our best engineers started fresh on these languages after they joined us). You understand the needs of customers and can translate them into solutions that work and scale using open source cloud native technologies. You manage a team of technical engineers, but you also love to get your hands dirty with a new tool or a new framework from time to time.
You build your own perspective and viewpoint on things, because you don't believe ivory tower architects are effective. You will potentially contribute to open source projects as part of your work, applying those technologies in the context of customer problems.
Mandatory: Docker, AWS, Linux, Kubernetes or ECS
- Prior experience provisioning and spinning up AWS clusters / Kubernetes
- Production experience building scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor and optimise production database servers
- Prior work with cloud monitoring tools (Nagios, Cacti, CloudWatch, etc.)
- Experience with Docker, Kubernetes, Mesos, and NoSQL databases (DynamoDB, Cassandra, MongoDB, etc.)
- Other open source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge of the Linux environment
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects
- Working knowledge of configuration management (Chef, Puppet or Ansible preferred)
- Continuous integration tools (Jenkins preferred)
- Experience handling large production deployments and infrastructure
- DevOps-based infrastructure and application deployment experience
- Working knowledge of AWS network architecture, including designing VPN solutions between regions and subnets
- Hands-on knowledge of the AWS AMI architecture, including the development of machine templates and blueprints
- Ability to validate that the environment meets all security and compliance controls
- Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, and Cost Management Platform
- Proven written and verbal communication skills
- Ability to serve as the technical team lead overseeing the build of the cloud environment based on customer requirements
- Previous NOC experience
- Client-facing experience with excellent customer communication and documentation skills
Work on the toughest problems in stock markets. Lead our efforts to reach 99.999% availability for 25+ services across 200+ servers.

Summary
We are building the fastest, most reliable & intelligent trading platform. That requires highly available, scalable & performant systems, and you will be playing one of the most crucial roles in making this happen. You will be leading our efforts in designing, automating, deploying, scaling and monitoring all our core products.

Tech Facts so Far
1. 8+ services deployed on 50+ servers
2. 35K+ concurrent users on average
3. 1M+ algorithms run every min
4. 100M+ messages/min
We are a 4-member backend team with 1 DevOps engineer. Yes, all of this is done by this incredibly lean team.

Big Challenges for You
1. Manage 25+ services on 200+ servers
2. Achieve 99.999% (5 nines) availability
3. Make 1-minute automated deployments possible
If you like to work on extreme scale, complexity & availability, then you will love it here.

Who are we
We are on a mission to help retail traders prosper in the stock market. In just 3 years, we have built the 3rd most popular app for the stock markets in India, and we are aiming to be the de-facto trading app in the next 2 years. We are a young, lean team of ordinary people building exceptional products that solve real problems. We love to innovate, thrill customers and work with brilliant & humble humans.

Key Objectives for You
• Spearhead system & network architecture
• CI, CD & automated deployments
• Achieve 99.999% availability
• Ensure in-depth & real-time monitoring, alerting & analytics
• Enable faster root cause analysis with improved visibility
• Ensure a high level of security

Possible Growth Paths for You
• Be our Lead DevOps Engineer
• Be a Performance & Security Expert

Perks
• Challenges that will push you beyond your limits
• A democratic place where everyone is heard & aware
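For context on the "5 nines" target above: an availability percentage translates directly into a downtime budget, and 99.999% leaves only minutes per year. A quick illustrative calculation (generic arithmetic, not code from the posting):

```python
# Downtime budget implied by an availability target.
# Illustrative arithmetic only; the targets below are generic examples.

def downtime_per_year_minutes(availability: float) -> float:
    """Minutes of allowed downtime per (365-day) year for a given availability."""
    minutes_per_year = 365 * 24 * 60
    return (1 - availability) * minutes_per_year

for label, target in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    print(f"{label} ({target:.3%}): {downtime_per_year_minutes(target):.2f} min/year")
```

Five nines works out to roughly 5.26 minutes of downtime per year, which is why the posting pairs the target with automated deployments and real-time monitoring.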
We are looking for a Node.js Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the front-end. You will also be responsible for integrating the front-end elements built by your co-workers into the application. Therefore, a basic understanding of front-end technologies is necessary as well.

The candidate should have knowledge of and experience with the following technologies:
- Great at building server-side code in NodeJs & ExpressJs
- Experience with NoSQL databases such as MongoDB
- Experience in building and consuming REST APIs
- Authentication and authorisation using JWT tokens
- Proficient understanding of code versioning tools, such as Git
- DevOps knowledge, especially AWS EC2 and S3
- Knowledge of headless Chrome and Puppeteer will be a huge plus
- Experience in HTML, CSS
- Understanding of React will be a bonus
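The JWT authentication mentioned above boils down to signing a base64url-encoded header and payload with a shared secret and verifying that signature on every request. A minimal HS256 sketch (the posting asks for Node.js/Express; Python is used here only to illustrate the mechanics, and the secret and claims are invented for the example):

```python
import base64, hashlib, hmac, json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = body + "=" * (-len(body) % 4)  # re-pad base64url before decoding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-123", "role": "admin"}, b"example-secret")
print(verify_jwt(token, b"example-secret"))  # payload round-trips
print(verify_jwt(token, b"wrong-secret"))    # None: signature mismatch
```

In production you would use a maintained library (e.g. `jsonwebtoken` in Node) rather than hand-rolling this, but the signing/verification flow is the same.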
What we look for? You have worked with programmable infrastructure in some way: building a CI/CD pipeline, provisioning infrastructure with programs, or provisioning monitoring and logging infrastructure for large sets of machines. You love automating things, sometimes even what seems impossible to automate; for example, one of our engineers used Ansible to set up his Ubuntu machine and runs a playbook every time he has to install something :) You don't throw around words such as "high availability" or "resilient systems" without understanding them at least at a basic level, because you know the words are easy to say but building such a system in practice takes a fair amount of work. You love coaching people, whether about 12-factor apps or the latest tool that cut the time a task takes by some factor. You know that DevOps is meant to enable developers to do things better and faster! You understand the areas you have worked on very well, but you are curious about the many systems you may not have worked on and want to fiddle with them. You know that understanding applications and the runtime technologies together gives you a better perspective; you have never looked at them as two different things.

What you will learn and do? You will work with customers transforming their applications and adopting cloud native technologies. The technologies used will be Kubernetes, Prometheus, service meshes, distributed tracing and public cloud technologies. The problems and solutions in this space are continuously evolving, but fundamentally you will solve problems with the simplest, most scalable automation. You will build open source tools for problems that you think are common across customers and the industry; no one ever benefited from re-inventing the wheel, did they? You will hack around open source projects, understand their capabilities and limitations, and apply the right tool for the right job.
You will educate customers, from their operations engineers to their developers, on scalable ways to build and operate applications on modern cloud native infrastructure. We think InfraCloud is a rocketship you should join! InfraCloud has been working in cloud native technologies with early innovators since before Kubernetes was 1.0, when it seemed like Mesos would become the standard! Our focus and history in the area of programmable infrastructure, coupled with working with some innovative product companies, gives us some solid engineering challenges to work on. The BotKube project (https://github.com/infracloudio/botkube) was born from one of our hackathons and has been developed by our engineers and the community over the last 1.5 years. When we started developing BotKube's Microsoft Teams integration, another project was born: a Go SDK for Teams (https://github.com/infracloudio/msbotbuilder-go). We are also the second largest corporate contributor to Fission, a serverless framework for Kubernetes (http://github.com/fission/fission). Another time, an engineer working with a telecom company added support for 128-bit tracing IDs in Jaeger. These are just some examples, and there are many more; do make a point to ask the engineers you talk to about the other open source work we do. Our engineers are co-organizers of Kubernetes Pune, Docker Pune, and Python Pune, and can be found frequently speaking at local meetups and conferences.
Description
We are developing the world's first enterprise-level Platform-as-a-Service (PaaS) for robots, creating a rare opportunity for an experienced, product-focused engineering professional. The PaaS aims to offer innovative features that handle every part of the product lifecycle required to support and deliver consumer-facing connected machines and services. Site Reliability Engineering combines the skills of software and systems engineering. Your key responsibility is to focus on optimizing existing systems, building infrastructure, and eliminating work through automation, making systems more reliable and ensuring the highest possible uptime for a cloud-based robotics system.

Your responsibilities will include, but are not limited to:
- Support services before they go live through activities such as system design consulting, capacity planning, and launch reviews
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health
- Engage in and improve the whole lifecycle of services, from inception and design through deployment, operation, and refinement
- Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity
- Practice sustainable incident response and postmortems
- Build and evolve the operations handbook

Requirements
- Bachelor's degree in Computer Science or a similar technical field of study, or equivalent practical experience with an outstanding track record
- At least 2 years of experience in product development and/or supporting operations
- Mastery of one or more programming languages, including but not limited to Python, Golang, Ruby, Bash
- Familiarity with configuration management, Docker, IaaS, PaaS, continuous delivery, continuous integration, and DevOps
- Solid understanding of network fundamentals and practical experience troubleshooting networked services
- Demonstrated proficiency with Linux systems, public cloud platforms, and associated tools/technologies
- Fluency in English

Preferred Qualifications
- Extremely organised, detail-oriented and thorough in every undertaking
- Ability to balance multiple tasks and projects effectively and quickly adapt to new variables
- Experience in designing, analysing and troubleshooting distributed systems
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
- Ability to debug and optimize code and automate routine tasks

Benefits
- Competitive salary
- Stock options
- International working environment
- Bleeding edge technology
- Working with exceptionally talented engineers
YOE: 1-3 years
Skills: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure.
➢ Conceptualize, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect and build a containerized infrastructure using Docker, Mesosphere or similar platforms.
➢ Work with developers to institute systems, policies and workflows which allow for rollback of deployments; triage the release of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with postmortems, follow-up and review of issues affecting production availability.
➢ Establishing and enforcing systems monitoring tools and standards
➢ Establishing and enforcing risk assessment policies and standards
➢ Establishing and enforcing escalation policies and standards
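The rollback-capable deployment workflow described above can be reduced to a simple pattern: deploy, gate on a health check, and revert if the check fails. A toy sketch of that gate (generic illustration; the callables and version strings are invented, and a real pipeline would drive this from Jenkins stages rather than in-process lambdas):

```python
def deploy_with_rollback(deploy, health_check, rollback):
    """Deploy a new release, gate on a health check, roll back on failure.

    Generic CI/CD sketch: deploy/health_check/rollback are callables
    supplied by the pipeline; nothing here is tool-specific.
    """
    deploy()
    if health_check():
        return "released"
    rollback()
    return "rolled-back"

# Hypothetical usage with stubbed pipeline steps:
state = {"live": "v1.4.0"}

result = deploy_with_rollback(
    deploy=lambda: state.update(live="v1.5.0"),
    health_check=lambda: False,                  # simulate a failing release
    rollback=lambda: state.update(live="v1.4.0"),
)
print(result, state["live"])  # rolled-back v1.4.0
```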
Job Description
We are looking for a DevOps Engineer to own our AWS & Azure infrastructure. This position is a mixture of typical build and release engineering activities, DevOps tasks, cloud administration and server management activities. This role will be responsible for automation, scaling, and deployments in AWS & Azure. The ideal candidate will have experience with CI/CD pipelines.

What You Will Do
Build Operations: Schedule, perform and notify stakeholders of software builds. This includes compiling and deploying Python, AngularJS, NodeJS and .NET web applications and SQL changes to target environments through a customized Team Build process. Work with all parties to resolve change-related scheduling conflicts according to established practices. Collaborate with QA and development managers on the timing and content of builds, releases, and patches.
Build Automation and Configuration Management: Work with development teams to ensure configuration management requirements are defined and solutions accurately designed for new websites and software products. Research and recommend improvements in procedures and tools. Share in developing effective configuration management strategies and solutions for existing and new products.
Website, Database, Software, and Server Management: Set up and configure new APIs, applications and databases. Develop and maintain tools to automate website setup, database rebaselining, or server management as needed under the direction of the Systems Architect. Install, configure, and customize internal and 3rd-party software and utilities. Create and maintain application administration, server administration, and server setup documentation.
User Training and Support: Provide expertise, support, and training for Microsoft TFS. Manage the TFS code repository. Maintain and improve practices of branching, code merging, etc. Create and maintain workstation setup and configuration management policies and procedures documentation. Provide assistance and training for a variety of configuration management procedures and tools.

What You Will Need
- Bachelor's degree or equivalent experience in computer science or similar
- Knowledge of fundamental software engineering principles and relational database principles
- Experience administering AWS & Azure infrastructure
- Experience administering Team Foundation Server or VSTS in the cloud
- Experience with scripting; knowledge of PowerShell a plus
- Experience in build and release processes and procedures and source code management
- Knowledge of C#/ASP.NET
- Knowledge of T-SQL
- Knowledge of Jenkins
- Knowledge of CI/CD processes

Competencies
- Strong organizational skills and attention to detail
- Exceptional time management skills
- Positive outlook, patience, and strong relationship-building skills
- Conduct yourself with business integrity
- Strong sense of urgency
- Take ownership
- Embrace humility and teamwork
- Ability to connect and influence others
- Self-motivated with a demonstrated ability to take initiative
- Adaptability to change
Only candidates ready to attend at least one round of face-to-face interviews should apply.
Job Description:
- Linux: 4 or more years in Unix systems engineering with experience in Red Hat Linux, CentOS or Ubuntu.
- AWS: Working experience and a good understanding of the AWS environment, including VPC, EC2, EBS, S3, RDS, SQS, CloudFormation, Lambda, and Redshift.
- DevOps: Experience with DevOps automation - orchestration/configuration management and CI/CD tools (Ansible, Chef, Puppet, Jenkins, etc.). Puppet and Ansible experience a strong plus.
- Programming: Experience programming with Python, Bash, REST APIs, and JSON encoding.
- Version Control: Git experience nice to have.
- Testing: Very familiar with CI/CD; good scripting skills (Python, Unix shell, etc.).
- Security: Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
- Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, Nagios or Splunk.
- Backup/Recovery: Experience with the design and implementation of big data backup/recovery solutions.
- Networking: Working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB, HAProxy) and high-availability architecture.
- Ability to keep systems running at peak performance and perform operating system upgrades, patches, and version upgrades as required.
- Implementation of auto scaling for instances under ELB using ELB health checks.
- Work experience with S3 buckets.
- IAM and its policy management to restrict users to particular AWS resources.
- Strong ability to troubleshoot any issues generated while building, deploying and in production support.
- Experience in performance tuning, garbage collection and memory leaks, networking and information security, and IO monitoring and analysis.
- Experience in resolving escalated tickets and performance problems, and creating Root Cause Analysis (RCA) documents for Severity 1 issues.
- Experience with disaster recovery implementations and participation in a 24/7 on-call rotation with other team members.
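The IAM policy management mentioned above (restricting users to particular AWS resources) usually comes down to authoring a JSON policy document. A minimal least-privilege sketch built in Python, with a made-up bucket name (`example-app-data`) purely for illustration:

```python
import json

BUCKET = "example-app-data"  # hypothetical bucket name, not from the posting

# Least-privilege policy: allow listing plus object read/write on one bucket only.
# Note: s3:ListBucket applies to the bucket ARN; Get/PutObject apply to object ARNs.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to a user or role, this grants access to `example-app-data` and nothing else, since IAM denies anything not explicitly allowed.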
Detailed Job Description / Essential Duties and Responsibilities:
The role requires deep expertise in delivering modern web application solutions on the open source tech platform, as well as the ability to communicate how these solutions drive business value. Detailed responsibilities include:
- Lead the engineering team and ensure customer requirements and needs across the current and future targeted segments are met.
- Work with the Product Owner, US counterparts and SMEs to understand the requirements and prioritize development efforts.
- Work in the capacity of the Scrum Master role and manage development through the Agile Scrum framework; activities include sprint planning, presiding over Scrum meetings, sprint retrospectives, publishing reports, etc.
- Create user stories and epics in the project management tool based on the requirements received from various stakeholders, including clients.
- Manage all aspects of engineering operations, including product design, development, testing and deployment.
- Architect the applications and provide recommendations to choose the right technologies for the products to be built.
- Manage operational efforts for the contracted clients and ensure the SLAs with the required quality are met.
- Manage the product support team and ensure the SLAs for response and resolution meet or exceed the customer's expectations.
- Build and guide developers in building full-stack web applications at Internet scale.
- Review code and provide feedback to ensure timely delivery with the required quality.
- Programming in the MEAN stack or similar open source technologies and frameworks.
- Data modelling using NoSQL and relational database management systems.
- UI/UX design and implementation.
- Web development with Bootstrap, Angular Material, and other libraries.
- Develop data/file parsing programs using Python or other programming languages.
- Create methods and algorithms for data analysis and optimization.
- Develop curricula for talent development, knowledge sharing, etc.
- Establish and maintain cooperative working relationships with all stakeholders involved.
- Be willing, capable and excited to roll up your sleeves and code when needed.
- Have exceptional expertise in one of the following areas: backend/front-end web application frameworks, serverless architecture, microservices platforms, AWS services.
DevOps Engineer Skill Set
Must have:
- AWS: 2+ years' experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, IAM, CloudWatch) to develop and maintain an AWS-based cloud solution, with an emphasis on best-practice cloud security.
- DevOps: Solid experience as a DevOps engineer in a 24x7 uptime AWS environment, including automation experience with configuration management tools.
- Strong scripting and automation skills.
- Expertise in Linux system administration.
- Beneficial to have: basic DB administration experience (MySQL).
- Working knowledge of some of the major open source web containers & servers like Apache, Tomcat and Nginx.
- Understanding of network topologies and common network protocols and services (DNS, HTTP(S), SSH, FTP, SMTP).
- Experience with Docker, Ansible & Python.

Role & Responsibilities:
- Deploy, automate, maintain and manage the AWS cloud-based production system to ensure the availability, performance, scalability and security of production systems.
- Build, release and configuration management of production systems.
- System troubleshooting and problem solving across platform and application domains.
- Ensure critical system security using best-in-class cloud security solutions.
Requirements:
• Hands-on programming skills developing automation modules in Python/Go/Bash
• Hands-on experience with both private and public cloud infrastructure and interfacing programmatically through APIs
• Relevant DevOps/operations/development experience working in an Agile DevOps culture on large-scale distributed systems in production infrastructure
• Successful track record of providing production support for large-scale distributed systems, with experience in Docker Swarm and Kubernetes
• Hands-on experience with continuous integration and build tools like Jenkins, along with version control systems like Git/SVN and ticketing systems like Jira/Bugzilla
• Experience setting up and managing DevOps tools for repositories, monitoring, log analysis, etc.
• Solid understanding of applications, networking and open source tools, with working knowledge of clusters and service discovery systems
• Working knowledge of security and vulnerability detection
• Should have performed production migrations and updates/upgrades with minimum or zero downtime
• Good knowledge of security, audits and compliance requirements
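The zero/minimum-downtime upgrades mentioned above are typically done as rolling updates: instances are replaced in batches small enough that a minimum number always stay in service. A toy batch planner (generic sketch; the fleet size and node names are invented, and orchestrators like Kubernetes and Docker Swarm do this natively):

```python
def rolling_update_batches(instances, min_healthy):
    """Yield batches of instances to replace while keeping >= min_healthy serving.

    Each batch must finish (replaced and healthy again) before the next
    starts; batch size is just the capacity headroom above min_healthy.
    """
    if min_healthy >= len(instances):
        raise ValueError("cannot update: no capacity headroom")
    batch_size = len(instances) - min_healthy
    for i in range(0, len(instances), batch_size):
        yield instances[i:i + batch_size]

# Hypothetical fleet of six instances, keeping at least four serving traffic:
fleet = [f"node-{n}" for n in range(1, 7)]
for batch in rolling_update_batches(fleet, min_healthy=4):
    print("replacing:", batch)
```

With `min_healthy` equal to the fleet size the update is impossible without surge capacity, which is why real rollouts often add temporary extra instances instead.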