- Develop backend services in Go
- Write code that handles thousands of requests per second
- Follow XP/Agile best practices such as TDD, SOLID principles, and pair programming
- Work with cloud-native services and debug issues in live environments
- 3 to 8 years of development and delivery experience with Go
- Good knowledge of RESTful web services and solid experience with API frameworks
- Familiarity with code versioning tools (Git)
- Experience writing scalable solutions and knowledge of big-data tools like Kafka
- Hands-on experience in analysis, design, coding, and implementation of complex, custom-built applications.
- Strong object-oriented and functional programming skills, including strong design pattern knowledge.
- Familiarity with different databases, such as PostgreSQL, MongoDB, and Neo4j
- Strong communication and client-facing skills.
- Ability to work and collaborate effectively with teams across different time zones.
- Experience with microservices; knowledge of microservice principles (service discovery, API gateways, etc.) is a bonus.
- Familiarity with at least one cloud service (Heroku/AWS/GCP/Azure) is good to have.
- Experience with API security standards and implementation (OAuth).
About CoffeeBeans Consulting
Vakilsearch is India's largest online legal, tax, and compliance provider. Through its products and end-to-end workflow automation, Vakilsearch has revolutionized how start-ups and small and medium enterprises register, run seamlessly, and comply with government regulations. On our mission to give individuals and businesses one-click access to all their legal and professional needs, we have helped over 4 lakh start-ups and small and medium enterprises to date.
We are looking for extremely talented programmers and problem solvers to join us in disrupting the legal, tax, and compliance product space.
We are looking for a passionate DevOps or DevSecOps Engineer who can support deployments and monitor our production, QE, performance, and staging environments. Applicants should have a strong understanding of UNIX internals and be able to clearly articulate how they work. Knowledge of shell scripting and security is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with applications built on Postgres and Ruby on Rails is a huge plus. At Vakilsearch, experience doesn't matter; passion to produce change does.
Responsibilities and Accountabilities:
- As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infrastructure components of Vakilsearch’s product, which are hosted in cloud services and an on-prem facility
- Design and build tools and frameworks that support deploying and managing our platform; explore new tools, technologies, and processes to improve speed, efficiency, and scalability
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across different environments
- Manage resources in a cost-effective, innovative manner, including assisting subordinates in effective use of resources and tools
- Resolve incidents escalated from monitoring tools and the business development team
- Implement and follow security guidelines, both policy and technology, to protect our data
- Identify root causes of issues, develop long-term solutions for recurring problems, and document them
- Willingness to perform production operations activities, including at night when required
- Ability to automate recurring tasks with scripts to increase velocity and quality
- Experience working with Linux servers, DevOps tools, and orchestration tools
- Linux, Red Hat, AWS, CompTIA, or other certifications are a value add
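As a rough illustration of the kind of scripted automation these responsibilities call for (the threshold and paths here are hypothetical examples, not part of the role description), a recurring disk-usage check might be sketched like this:

```python
import shutil

# Hypothetical alert threshold; a real setup would feed alerts into a
# monitoring tool such as Zabbix or Datadog rather than a local script.
USAGE_ALERT_PERCENT = 80

def disk_usage_percent(path="/"):
    """Return used disk space on `path` as a percentage of its capacity."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_disks(paths, threshold=USAGE_ALERT_PERCENT):
    """Return the subset of paths whose usage exceeds the threshold."""
    return [p for p in paths if disk_usage_percent(p) > threshold]

# Illustrative use: collect any mount points above the threshold.
over_threshold = check_disks(["/"])
```

Scheduling a script like this via cron (or a systemd timer) is one common way such recurring checks are automated.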
II-A Experience Required in DevOps Aspects:
- Length of Experience: 1 to 4 years of experience
- Nature of Experience:
- Experience in cloud deployments, Linux administration [ kernel tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP are a value add ]
- Experience in deployment solutions CI/CD like Jenkins or GitHub Actions [ Release Management is a value add ]
- Hands-on experience in any of the configuration management IaC tools like Ansible, Terraform, CloudFormation [ Chef & Puppet is a value add ]
- Administering, configuring, and using monitoring and alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc.
- Experience with Containerization, Orchestration tools like Docker, Kubernetes [ Docker swarm is a value add ]
- Experience in database applications like PostgreSQL, MongoDB & MySQL [DataOps]
- Good with version control and source code management systems like Git and GitHub
- Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
- Experience with web servers Nginx and Apache
- Knowledge of Redis, RabbitMQ, ELK, and REST APIs [ MLOps tools are a value add ]
- Knowledge of Puma, Unicorn, Gunicorn & Yarn
- Hands-on VMWare ESXi/Xencenter deployments
- Experience implementing and troubleshooting TCP/IP networks, VPNs, load balancing & web application firewalls
- Deploying, configuring, and maintaining Linux server systems on-premises and off-premises
- Code quality tools like SonarQube are a value add
- Test automation tools like Selenium, JMeter, and JUnit are a value add
- Experience in Heroku and OpenStack is a value add
II-B Experience Required in SecOps Aspects:
- Length of Experience: Minimum 1 year of experience
- Nature of Experience:
- Experience identifying inbound and outbound threats and resolving them
- Knowledge of CVEs and applying patches for the OS, Ruby gems, Node, and Python packages
- Documenting security fixes for future use
- Establish cross-team collaboration with security built into the software development lifecycle
- Forensics and Root Cause Analysis skills are mandatory
- Weekly sanity checks of the on-prem and off-prem environments.
III Skill Set & Personality Traits required:
- An understanding of programming languages and frameworks such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
- Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers
- Independent, self-motivated team player; meticulous and methodical in creating solutions
- Should be a quick learner
- Familiarity with Agile methodology
- Should have the ability to maintain a high level of alertness and attention to detail for extended periods
- Installation and administration of various software for the enterprise
IV Interview focus areas:
- AWS, GCP & Azure
- Monitoring, Containerization, Orchestration, & Networking
- Problem-solving, Scripting [ Bash / Python ], systems administration, and troubleshooting scenarios
- Cloud functions & Lambda
- Infrastructure as Code [ Ansible, Terraform, or CloudFormation ]
- Startup culture fit, agility, persistence, communication
V Location: Chennai, India
- Working on the scalability, maintainability, and reliability of the company's products.
- Working with clients to solve their day-to-day challenges, moving manual processes to automation.
- Keeping systems reliable and gauging the effort it takes to reach there.
- Juxtaposing tools and technologies to choose x over y.
- Understanding Infrastructure as Code and applying software design principles to it.
- Automating tedious work using your favourite scripting languages.
- Taking code from the local system to production by implementing Continuous Integration and Delivery principles.
What you need to have:
- Worked with any one of the programming languages like Go, Python, Java, Ruby.
- Work experience with public cloud providers like AWS, GCP or Azure.
- Understanding of Linux systems and Containers
- Meticulous in creating and following runbooks and checklists
- Microservices experience and use of orchestration tools like Kubernetes/Nomad.
- Understanding of Computer Networking fundamentals like TCP, UDP.
- Strong bash scripting skills.
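The networking and scripting items above come together in everyday reachability checks. A minimal sketch (in Python rather than bash, with hypothetical hosts) of a TCP health check of the kind such a role might automate:

```python
import socket

def tcp_check(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, DNS failure, etc.
        return False

# Demonstrate against a throwaway local listener (port chosen by the OS).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
reachable = tcp_check("127.0.0.1", port)  # True while the listener is up
listener.close()
```

The same idea underlies load-balancer health probes and monitoring-tool checks; a production version would add retries and report into an alerting stack.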
Synapsica is a series-A funded HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial intelligence enabled cloud based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endia Partners, YCombinator and other investors from India, US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.
Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.
- Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
- Optimization and execution of CI/CD pipelines for multiple products, and timely promotion of releases to production environments
- Ensuring that mission critical applications are deployed and optimised for high availability, security & privacy compliance and disaster recovery.
- Strategize, implement, and verify secure coding techniques; integrate code security tools into Continuous Integration
- Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
- Technical documentation through all stages of development
- Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation
- Minimum of 6 years of experience with DevOps tools.
- Working experience with Linux, container orchestration and management technologies (Docker, Kubernetes, EKS, ECS …).
- Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
- Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
- Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
- Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
- DevOps mindset and experience with Agile/Scrum methodology
- Basic knowledge of storage and databases (SQL and NoSQL)
- Good understanding of networking technologies, HAProxy, firewalling and security.
- Experience in Security vulnerability scans and remediation
- Experience in API security and credentials management
- Worked on Microservice configurations across dev/test/prod environments
- Ability to quickly pick up new languages and technologies
- A strong team player attitude with excellent communication skills.
- Very high sense of ownership.
- Deep interest and passion for technology
- Ability to plan projects, execute them and meet the deadline
- Excellent verbal and written English communication.
This role requires a balance between hands-on infrastructure-as-code deployments as well as involvement in operational architecture and technology advocacy initiatives across the Numerator portfolio.
- Must have a minimum of 3 years of experience in managing AWS resources and automating CI/CD pipelines.
- Strong scripting skills in PowerShell, Python, or Bash to build and administer CI/CD pipelines.
- Knowledge of infrastructure tools like CloudFormation, Terraform, and Ansible.
- Experience with microservices and/or event-driven architecture.
- Experience using containerization technologies (Docker, ECS, Kubernetes, Mesos or Vagrant).
- Strong practical Windows and Linux system administration skills in the cloud.
- Understanding of DNS, NFS, TCP/IP and other protocols.
- Knowledge of secure SDLC, OWASP top 10 and CWE/SANS top 25.
- Deep understanding of WebSockets and how they function. Hands-on experience with ElastiCache, Redis, ECS, or EKS. Installation, configuration, and management of Apache or Nginx web servers and Apache Tomcat application servers; configuring SSL certificates; setting up reverse proxies.
- Exposure to RDBMS (MySQL, SQL Server, Aurora, etc.) is a plus.
- Exposure to programming languages like Java, PHP, and SQL is a plus.
- AWS Developer or AWS SysOps Administrator certification is a plus.
- AWS Solutions Architect Certification experience is a plus.
- Experience building blue/green, canary, or other zero-downtime deployment strategies, and an advanced understanding of VPC, EC2, Route 53, IAM, and Lambda is a plus.
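The zero-downtime strategies named in the list above (blue/green, canary) ultimately come down to routing a weighted fraction of traffic to the new version. A minimal, framework-free sketch (backend names and weights are illustrative, not any particular company's setup):

```python
import random

def choose_backend(weights, rng=random.random):
    """Pick a backend name with probability proportional to its weight.

    weights: non-empty dict mapping backend name -> non-negative weight.
    rng: zero-argument callable returning a float in [0, 1); injectable
    so routing decisions can be made deterministic for testing.
    """
    total = sum(weights.values())
    r = rng() * total
    cumulative = 0.0
    for name, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return name
    return name  # guard against floating-point edge cases at r == total

# A 10% canary: most requests hit "blue" (current), a tenth hit "green" (new).
canary_weights = {"blue": 90, "green": 10}
```

A rollout then just means shifting weight gradually from "blue" to "green" while watching error metrics; real deployments delegate this weighting to a load balancer or service mesh rather than application code.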
Cloud Software Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California. It provides full-stack managed services to support business applications and data infrastructure, offering data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement services are built on standard DevOps practice and the SRE model.
We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, who are willing to apply their experience in areas directly related to infrastructure services, software as a service, and cloud services, and who want to create a niche in the market.
Roles and Responsibilities
· Work on a wide variety of engineering projects, including data visualization, web services, data engineering, web portals, SDKs, and integrations, in numerous languages, frameworks, and cloud platforms
· Apply continuous delivery practices to deliver high-quality software and value as early as possible.
· Work in collaborative teams to build new experiences
· Participate in the entire cycle of software consulting and delivery from ideation to deployment
· Integrating multiple software products across cloud and hybrid environments
· Developing processes and procedures for migrating software applications to the cloud, as well as for managed services in the cloud
· Migrating existing on-premises software applications to the cloud using a structured method and best practices
Desired Candidate Profile: *** freshers can also apply ***
· 2+ years of experience with one or more development languages such as Java, Python, or Spark.
· 1+ years of experience with private/public/hybrid cloud model design, implementation, orchestration, and support.
· Certification or completed training in any one of the cloud environments: AWS, GCP, Azure, Oracle Cloud, or Digital Ocean.
· Strong problem-solvers who are comfortable in unfamiliar situations, and can view challenges through multiple perspectives
· Driven to develop technical skills for oneself and team-mates
· Hands-on experience with cloud computing and/or traditional enterprise datacentre technologies, i.e., network, compute, storage, and virtualization.
· Possess at least one cloud-related certification from AWS, Azure, or equivalent
· Ability to write high-quality, well-tested code and comfort with Object-Oriented or functional programming patterns
· Past experience quickly learning new languages and frameworks
· Ability to work with a high degree of autonomy and self-direction
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and 'fixes'
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
Daily and Monthly Responsibilities:
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
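Root-cause analysis of the kind listed above usually starts with triaging logs. A small illustrative sketch (the `LEVEL message` log format here is a made-up simplification) that counts error signatures to surface the most frequent failure:

```python
from collections import Counter

def top_errors(log_lines, n=3):
    """Count ERROR lines by message and return the n most common.

    Assumes a simple 'LEVEL message' line format; real logs would be
    parsed properly or queried through a log stack like ELK or Loki.
    """
    errors = Counter()
    for line in log_lines:
        level, _, message = line.partition(" ")
        if level == "ERROR":
            errors[message] += 1
    return errors.most_common(n)

logs = [
    "INFO request served",
    "ERROR db connection timeout",
    "ERROR db connection timeout",
    "ERROR disk full",
]
print(top_errors(logs))  # [('db connection timeout', 2), ('disk full', 1)]
```

Ranking failures by frequency like this is a common first step before digging into the top offender for a proper root cause.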
Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a relevant field
- Experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good knowledge of Python
- Working knowledge of SQL and databases such as MySQL and PostgreSQL
- Problem solving attitude
- Collaborative team spirit
- Detailed knowledge of Linux systems (Ubuntu)
- Proficient in the AWS console; should have handled the infrastructure of a product (including dev and prod environments)
Mandatory hands-on experience in the following:
- Python based application deployment and maintenance
- NGINX web server
- AWS services: EC2, VPC, EBS, S3
- IAM setup
- Database configurations MySQL, PostgreSQL
- Linux flavoured OS
- Instance/Disaster management
We're building an easy way for product teams to create and experiment with game-like experiences - without a developer. In short - CustomerGlu is a low code interactive engagement platform.
We're backed by Techstars and top-notch VCs from Silicon Valley and India, read a recent Yourstory article here.
We're growing at a fast pace - one new customer going live every week, with a growing pipeline of companies in integration.
Currently, we're looking to onboard a generalist engineer who can work on cloud architecture, DevOps, and security, and take on the challenge of scaling a global SaaS from India.
We're headquartered in the US with a fully owned subsidiary in India.
- Take ownership of the complete infrastructure development and maintenance
- Own, innovate and maintain processes for good DevOps practices within the company
- Educate the team and apply OWASP security principles in software design
- Optimize cost and set alerts across infrastructure
- Build good logging and error tracing design patterns
- Build and maintain auto-scaling infrastructure
- Design systems to meet company SLA and SLO
- Experience in building multi-region/datacenter based cloud infrastructure
- Experience in building highly available deployments (three nines, 99.9%, or above)
- Experience with building and maintaining CI/CD Pipelines
- Experience in container-based scalable deployments and orchestration tools such as Kubernetes/ECS or others
- Experience with MongoDB-compatible databases: MongoDB, Cosmos DB, DocumentDB
- Experience in designing VPCs, multiregional networking, and best practices in ACLs
- Experience with OpenSSL and OpenVPN deployments
- Experience with distributed data stores like Cassandra and DynamoDB
- Good understanding of single points of failure in distributed systems infrastructure and best practices for avoiding them
- Good understanding of GDPR and data storage design for multi-region systems
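Avoiding single points of failure, as the list above asks, often reduces to trying replicas in priority order until one answers. A minimal failover sketch (the region names and request function are made up for illustration; a real system would add health checks, timeouts, and backoff):

```python
def call_with_failover(endpoints, request_fn):
    """Try each endpoint in priority order; return the first successful result.

    endpoints: ordered list of endpoint identifiers (e.g. per-region URLs).
    request_fn: callable(endpoint) that returns a result or raises on failure.
    Raises RuntimeError if every endpoint fails.
    """
    last_error = None
    for endpoint in endpoints:
        try:
            return request_fn(endpoint)
        except Exception as err:
            last_error = err  # remember the failure, fall through to the next replica
    raise RuntimeError("all endpoints failed") from last_error

# Illustrative use: the primary region is down, the secondary serves.
def fake_request(endpoint):
    if endpoint == "us-east":
        raise ConnectionError("region down")
    return f"served by {endpoint}"

result = call_with_failover(["us-east", "eu-west"], fake_request)
# result == "served by eu-west"
```

Multi-region designs typically push this logic into DNS or a global load balancer, but the ordering-plus-fallback idea is the same.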
Good to have
- Experience with Terraform, Ansible, Jenkins, or similar tools
- Experience with Data streaming infrastructure such as Kafka/Kinesis etc
- Good understanding of security best practices for Infra, databases, and application
- In-depth understanding of OWASP Top 10 vulnerabilities and mitigation strategies
- Proficient in application gateways, load balancers, and Nginx or other web server configurations
- Experience with configuring serverless offerings such as Lambda or Azure Functions
- Experience with Spark deployments (Standalone or managed like EMR)
- Broad ownership of software that enables thousands of customers globally
- Build leadership skills by working with a growing engineering team
- Build a cutting-edge, much-needed product