Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) HealthTech startup founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, and they should not have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.
Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems; help automate and streamline our operations and processes; build and maintain tools for deployment, monitoring, and operations; and troubleshoot and resolve issues in our development, test, and production environments. The position is based in our Bangalore office.
Primary Responsibilities
- Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
- Optimising and executing the CI/CD pipelines of multiple products, and promoting releases to production environments in a timely manner
- Ensuring that mission-critical applications are deployed and optimised for high availability, security and privacy compliance, and disaster recovery.
- Strategize, implement, and verify secure coding techniques, and integrate code security tools into Continuous Integration
- Ensure the efficiency, responsiveness, scalability, and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
- Produce technical documentation through all stages of development
- Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation
Requirements
- Minimum of 6 years of experience with DevOps tools.
- Working experience with Linux, container orchestration and management technologies (Docker, Kubernetes, EKS, ECS …).
- Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
- Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
- Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
- Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
- DevOps mindset and experience with Agile/Scrum methodology
- Basic knowledge of storage and databases (SQL and NoSQL)
- Good understanding of networking technologies, HAProxy, firewalling and security.
- Experience in Security vulnerability scans and remediation
- Experience in API security and credentials management
- Worked on Microservice configurations across dev/test/prod environments
- Ability to quickly adapt to new languages and technologies
- A strong team player attitude with excellent communication skills.
- Very high sense of ownership.
- Deep interest and passion for technology
- Ability to plan projects, execute them and meet the deadline
- Excellent verbal and written English communication.
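The "infrastructure as code" requirement above can be pictured with a minimal Terraform sketch. This is a generic example under assumed values; the region, AMI id, and names are placeholders, not a description of Synapsica's actual infrastructure:

```hcl
# Hypothetical sketch: one EC2 instance behind a security group that
# admits HTTPS only. All names and values are illustrative placeholders.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_security_group" "web" {
  name = "example-web-sg"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app" {
  ami                    = "ami-00000000" # placeholder AMI id
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]

  tags = {
    Name = "example-app"
  }
}
```

The point of expressing this in code rather than clicking through a console is that `terraform plan` shows the exact change set before anything is touched, and the same file reproduces the environment for dev, staging, and production.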
Skills We Require: DevOps, AWS administration, Terraform, Infrastructure as Code
Summary:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Good hands-on experience with DevOps, AWS administration, Terraform, and Infrastructure as Code
Knowledge of EC2, Lambda, S3, ELB, VPC, IAM, CloudWatch, CentOS, and server hardening
Ability to understand business requirements and translate them into technical requirements
A knack for benchmarking and optimization
Job Description: DevOps Engineer
About Hyno:
Hyno Technologies is a unique blend of top-notch designers and world-class developers for new-age product development. Within the last two years we have collaborated with 32 young startups from India, the US, and the EU to find the optimum solution to their complex business problems. We have helped them address issues of scalability and optimisation through the use of technology at minimal cost. To us, any new challenge is an opportunity.
As part of Hyno’s expansion plans, Hyno, in partnership with Sparity, is seeking an experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in enhancing our software development processes, optimising system infrastructure, and ensuring the seamless deployment of applications. If you are passionate about leveraging cutting-edge technologies to drive efficiency, reliability, and scalability in software development, this is the perfect opportunity for you.
Position: DevOps Engineer
Experience: 5-7 years
Responsibilities:
- Collaborate with cross-functional teams to design, develop, and implement CI/CD pipelines for automated application deployment, testing, and monitoring.
- Manage and maintain cloud infrastructure using tools like AWS, Azure, or GCP, ensuring scalability, security, and high availability.
- Develop and implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate the provisioning and management of resources.
- Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardised best practices.
- Work closely with development teams to provide support and guidance in building applications with a focus on scalability, reliability, and security.
- Perform regular security assessments and implement best practices for securing the entire development and deployment pipeline.
- Troubleshoot and resolve issues related to infrastructure, deployment, and application performance in a timely manner.
- Follow regulatory and ISO 13485 requirements.
- Stay updated with industry trends and emerging technologies in the DevOps and cloud space, and proactively suggest improvements to current processes.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Minimum of 5 years of hands-on experience in DevOps, system administration, or related roles.
- Solid understanding of containerization technologies (Docker, Kubernetes) and orchestration tools
- Strong experience with cloud platforms such as AWS, Azure, or GCP, including services like ECS, S3, RDS, and more.
- Proficiency in at least one programming/scripting language such as Python, Bash, or PowerShell.
- Demonstrated experience in building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with container (Docker, ECS, EKS), serverless (Lambda), and Virtual Machine (VMware, KVM) architectures.
- Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Pulumi.
- Strong knowledge of monitoring and logging tools such as Prometheus, ELK stack, or Splunk.
- Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.
- Strong communication skills and the ability to work independently as well as in a team.
Nice to Have:
- Relevant certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
- Experience with microservices architecture and serverless computing.
Soft Skills:
- Excellent written and verbal communication skills.
- Ability to manage conflict effectively.
- Ability to adapt and be productive in a dynamic environment.
- Strong communication and collaboration skills supporting multiple stakeholders and business operations.
- Self-starter, self-managed, and a team player.
Join us in shaping the future of DevOps at Hyno in collaboration with Sparity. If you are a highly motivated and skilled DevOps Engineer, eager to make an impact in a remote setting, we'd love to hear from you.
Objectives :
- Building and setting up new development tools and infrastructure
- Working on ways to automate and improve development and release processes
- Testing code written by others and analyzing results
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and ‘fixes’
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Planning out projects and being involved in project management decisions
Daily and Monthly Responsibilities :
- Deploy updates and fixes
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
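The "develop scripts to automate visualization" item above could look like the following minimal Python sketch, which aggregates per-hour error counts from application log lines into CSV for a plotting tool. The log format and function names are assumptions for illustration, not any particular system's:

```python
import csv
import io
import re
from collections import Counter

# Assumed (hypothetical) log format: "2024-01-15T10:23:45 ERROR payment timeout"
LOG_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}):\d{2}:\d{2}\s+(\w+)\s+")

def error_counts_per_hour(lines):
    """Count ERROR-level lines, bucketed by their hour prefix."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group(2) == "ERROR":
            counts[m.group(1)] += 1  # key is e.g. "2024-01-15T10"
    return counts

def to_csv(counts):
    """Render hour/count pairs as CSV ready for a plotting tool."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["hour", "errors"])
    for hour in sorted(counts):
        writer.writerow([hour, counts[hour]])
    return buf.getvalue()

if __name__ == "__main__":
    sample = [
        "2024-01-15T10:23:45 ERROR payment timeout",
        "2024-01-15T10:41:02 INFO health check ok",
        "2024-01-15T11:05:10 ERROR db connection refused",
    ]
    print(to_csv(error_counts_per_hour(sample)))
```

In practice such a script would read from a log file or aggregator rather than an in-memory list; the CSV output is the interface to whatever dashboard or plotting step follows.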
Skills and Qualifications :
- Degree (e.g. BSc) in Computer Science, Software Engineering, or a relevant field
- 3+ years of experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good logical skills and knowledge of programming concepts (OOP, data structures)
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
We are hiring DevOps Engineers for a luxury-commerce platform that is well-funded and ready for its next level of growth. It is backed by reputed investors and is already a leader in its space. The focus for the coming years will be heavily on scaling the platform through technology. Market-driven competitive salary for the right candidate.
Job Title : DevOps System Engineer
Responsibilities:
- Implementing, maintaining, monitoring, and supporting the IT infrastructure
- Writing scripts for service quality analysis, monitoring, and operation
- Designing procedures for system troubleshooting and maintenance
- Investigating and resolving technical issues by deploying updates/fixes
- Implementing automation tools and frameworks for automatic code deployment (CI/CD)
- Quality control and management of the codebase
- Ownership of infrastructure and deployments in various environments
Requirements:
- Degree in Computer Science, Engineering or a related field
- Prior experience as a DevOps engineer
- Good knowledge of various operating systems: Linux, Windows, macOS.
- Good knowledge of networking, virtualization, and containerization technologies.
- Familiarity with software release management and deployment (Git, CI/CD)
- Familiarity with one or more popular cloud platforms such as AWS, Azure, etc.
- Solid understanding of DevOps principles and practices
- Knowledge of systems and platforms security
- Good problem-solving skills and attention to detail
Skills: Linux, Networking, Docker, Kubernetes, AWS/Azure, Git/GitHub, Jenkins, Selenium, Puppet/Chef/Ansible, Nagios
Experience : 5+ years
Location: Prabhadevi, Mumbai
Interested candidates can apply with their updated profiles.
Regards,
HR Team
Aza Fashions
- Must have a minimum of 3 years of experience in managing AWS resources and automating CI/CD pipelines.
- Strong scripting skills in PowerShell, Python, or Bash, and the ability to build and administer CI/CD pipelines.
- Knowledge of infrastructure tools like CloudFormation, Terraform, Ansible.
- Experience with microservices and/or event-driven architecture.
- Experience using containerization technologies (Docker, ECS, Kubernetes, Mesos or Vagrant).
- Strong practical Windows and Linux system administration skills in the cloud.
- Understanding of DNS, NFS, TCP/IP and other protocols.
- Knowledge of secure SDLC, OWASP top 10 and CWE/SANS top 25.
- Deep understanding of WebSockets and how they function. Hands-on experience with ElastiCache, Redis, ECS, or EKS. Installation, configuration, and management of Apache or Nginx web servers and Apache Tomcat application servers; configuring SSL certificates and setting up reverse proxies.
- Exposure to RDBMS (MySQL, SQL Server, Aurora, etc.) is a plus.
- Exposure to programming languages like JAVA, PHP, SQL is a plus.
- AWS Developer or AWS SysOps Administrator certification is a plus.
- AWS Solutions Architect Certification experience is a plus.
- Experience building Blue/Green, Canary, or other zero-downtime deployment strategies; advanced understanding of VPC, EC2, Route 53, IAM, and Lambda is a plus.
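As a sketch of the blue/green (zero-downtime) deployment idea mentioned above, a hypothetical Nginx reverse-proxy configuration might route traffic through a "live" upstream whose single server line is flipped at cutover. Ports, server names, and certificate paths are illustrative placeholders:

```nginx
# Hypothetical blue/green switch: two identical environments listen on
# different ports; "live" points at whichever is serving production.
upstream blue  { server 127.0.0.1:8001; }
upstream green { server 127.0.0.1:8002; }

# Flip this one line and reload Nginx to cut over with no downtime.
upstream live  { server 127.0.0.1:8001; }  # currently: blue

server {
    listen 443 ssl;
    server_name example.internal;  # placeholder

    ssl_certificate     /etc/nginx/tls/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/tls/example.key;

    location / {
        proxy_pass http://live;
        proxy_set_header Host $host;
    }
}
```

The new version is deployed to the idle colour, smoke-tested directly on its port, and only then promoted; rollback is the same one-line flip in reverse.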
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS Cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors.
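For the "root cause analysis for production errors" item above, a common first step is grouping error lines by a normalised signature so the most frequent failure surfaces first. A minimal, hypothetical Python sketch (the masking rules and names are assumptions for illustration):

```python
import re
from collections import Counter

def signature(line):
    """Normalise an error line by masking hex ids and numbers, so that
    'timeout after 31s' and 'timeout after 29s' collapse into one signature."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<hex>", line)   # hex addresses
    line = re.sub(r"\b[0-9a-f]{8,}\b", "<id>", line)  # long hex-ish ids
    line = re.sub(r"\d+", "<n>", line)                # any remaining numbers
    return line.strip()

def top_errors(lines, n=3):
    """Return the n most frequent error signatures with their counts."""
    counts = Counter(signature(l) for l in lines if "ERROR" in l)
    return counts.most_common(n)
```

Ranking by signature turns thousands of raw log lines into a short list of distinct failure modes, which is usually where a root-cause investigation starts.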
Requirement
- 2+ years of experience as a team lead.
- Good command of Kubernetes.
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services: deployment, monitoring, and troubleshooting of applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management (Infrastructure as Code) tools such as Terraform and AWS CloudFormation.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
Must Have:
Linux, CI/CD (Jenkins), AWS, scripting (Bash/shell, Python, Go), Nginx, Docker.
Good to have
Configuration management (Ansible or a similar tool), a logging tool (ELK or similar), a monitoring tool (Nagios or similar), IaC (Terraform, CloudFormation).
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos healthcare is democratization of cancer care in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments and have India become a leader in oncology research. Karkinos will be with the patient every step of the way, to advise them, connect them to the best specialists, and to coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- Critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses, and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end to end CI / CD process and follow through with other team members to ensure high quality and predictable delivery
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that rest of the engineering teams can use.
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience in public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry) - preferably GCP.
- Experience managing infrastructure for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment and service orchestration using Kubernetes.
- Experience and a good understanding of Kubernetes, Service Mesh (Istio preferred), API Gateways, network proxies, etc.
- Experience in setting up infrastructure for central monitoring, with the ability to debug and trace issues
- Experience and deep understanding of Cloud Networking and Security
- Experience in Continuous Integration and Delivery (Jenkins, Maven, GitHub/GitLab).
- Strong scripting language knowledge, such as Python or shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
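The Kubernetes deployment and service-orchestration experience described above can be pictured with a minimal Deployment manifest; the image, probe path, and resource figures are illustrative placeholders, not Karkinos's actual services:

```yaml
# Hypothetical sketch: three replicas of a containerised API with a
# readiness probe, so rolling updates only shift traffic to healthy pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/example-api:1.0.0  # placeholder
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz   # placeholder health endpoint
              port: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

Declaring replicas, probes, and resource requests in the manifest is what lets Kubernetes do rolling upgrades and rescheduling without operator intervention.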
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
Our Client is an IT infrastructure services company, focused and specialized in delivering solutions and services on Microsoft products and technologies. They are a Microsoft partner and cloud solution provider. Our Client's objective is to help small, mid-sized as well as global enterprises to transform their business by using innovation in IT, adapting to the latest technologies and using IT as an enabler for business to meet business goals and continuous growth.
With focused and experienced management and a strong team of IT Infrastructure professionals, they are adding value by making IT Infrastructure a robust, agile, secure and cost-effective service to the business. As an independent IT Infrastructure company, they provide their clients with unbiased advice on how to successfully implement and manage technology to complement their business requirements.
- Working closely with other engineers and administrators
- Developing intimate knowledge of how best to customize the services available on various cloud platforms to help us become more secure and efficient.
- Assessing client requirements and coming up with costing for the sales team
- Planning and designing client infrastructure on Microsoft Azure and AWS
- Setting up alerts and monitor the health of cloud resources
- Handling the day-to-day management of clients’ cloud-based solutions
- Implementing security and protecting identities
- Diagnosing and troubleshooting technical issues relating to Microsoft Azure and AWS
- Helping customers successfully deploy and implement cloud computing solutions
- Resolving technical support tickets via telephone, chat, email and sometimes in-person
- Keeping self and team updated with new cloud services offerings from Microsoft, Amazon & Google
- Staying current with industry trends, making recommendations as needed to help the company excel
What you need to have:
- Experience in cloud-based tech
- This position requires excellent written and verbal communication skills and negotiation
- Should have working knowledge of Microsoft Azure Calculator and AWS Calculator
- A clear understanding of core Cloud Computing services
- Knowledge of various computer services on Microsoft Azure and AWS
- Knowledge of various storage services on Microsoft Azure and AWS
- Knowledge of log collecting services available with Microsoft Azure and AWS
- Experience of working with popular operating systems such as Linux & Windows
- Experience of computer networks
- Experience of computer technologies like Active Directory, network protocols & subnetting
- Experience in automating day to day tasks using PowerShell scripting
- Confidence in own abilities
- Knowledgeable within this subject area and a thought leader
- Fast assimilator of information
- Imaginative problem solver
- Structured organizer
- Strong relationship building skills
- Strong analytical & numeracy skills
- Ability to use initiative and work under pressure, prioritizing to meet deadlines
- Driven, leading on initiatives, being committed to the role, and delivering on objectives and deadlines
- Service Orientation, demonstrable commitment to customer service
Technical Experience/Knowledge Needed :
- Cloud-hosted services environment.
- Proven ability to work in a Cloud-based environment.
- Ability to manage and maintain Cloud Infrastructure on AWS
- Must have strong experience in technologies such as Docker, Kubernetes, Functions, etc.
- Knowledge of orchestration tools such as Ansible
- Experience with ELK Stack
- Strong knowledge in Micro Services, Container-based architecture and the corresponding deployment tools and techniques.
- Hands-on knowledge of implementing multi-staged CI / CD with tools like Jenkins and Git.
- Sound knowledge on tools like Kibana, Kafka, Grafana, Instana and so on.
- Proficient in Bash scripting.
- Must have in-depth knowledge of Clustering, Load Balancing, High Availability and Disaster Recovery, Auto Scaling, etc.
- AWS Certified Solutions Architect and/or Linux System Administrator certification
- Strong ability to work independently on complex issues
- Collaborate efficiently with internal experts to resolve customer issues quickly
- No objection to working night shifts, as the production support team operates 24x7. Rotational shifts are assigned weekly so that everyone gets an equal opportunity to work day and night shifts. Candidates willing to work night shifts only on a need basis can discuss this with us.
- Early Joining
- Willingness to work in Delhi NCR
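The multi-staged CI/CD requirement above (Jenkins plus Git) can be sketched as a declarative Jenkinsfile; the stage contents, registry URL, and deploy script are hypothetical placeholders:

```groovy
// Hypothetical multi-stage pipeline: build, test, push, then deploy
// only from the main branch. All commands and URLs are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'docker build -t registry.example.com/app:${GIT_COMMIT} .' }
        }
        stage('Test') {
            steps { sh 'docker run --rm registry.example.com/app:${GIT_COMMIT} ./run-tests.sh' }
        }
        stage('Push') {
            steps { sh 'docker push registry.example.com/app:${GIT_COMMIT}' }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps { sh './deploy.sh staging ${GIT_COMMIT}' }  // placeholder script
        }
    }
}
```

Tagging images with the Git commit, as sketched here, keeps every deployed artifact traceable back to the exact source revision that produced it.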
Job Brief:
We are looking for candidates who have development experience and have worked on CI/CD-based projects. Should have good hands-on experience with Jenkins master/agent architecture and with AWS-native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. Should have experience setting up cross-platform CI/CD pipelines that span different cloud platforms, or on-premise and cloud platforms.
Job Location:
Pune.
Job Description:
- Hands-on with AWS (Amazon Web Services) Cloud, its DevOps services, and CloudFormation.
- Experience interacting with customers.
- Excellent communication.
- Hands-on in creating and managing Jenkins jobs, and Groovy scripting.
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands on with version control systems like GitHub, Gitlab, TFS, BitBucket, etc.
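Since the role involves Docker and creating Dockerfiles, here is a minimal, hypothetical multi-stage Dockerfile sketch; the base image, module name, and port are illustrative assumptions:

```dockerfile
# Hypothetical multi-stage build for a Python service: dependencies are
# installed in a build stage, so the runtime image stays small.
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

FROM python:3.12-slim
WORKDIR /app
COPY --from=build /install /usr/local
COPY . .
USER nobody
EXPOSE 8080
# "app" is a placeholder module name
CMD ["python", "-m", "app"]
```

The multi-stage split and the non-root `USER` line are common hardening defaults; build tooling never reaches the runtime image.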