
Lightning job by Cutshort ⚡:
As part of this feature, you can expect status updates about your application and replies within 48 hours (once the screening questions are answered).
About Us:
Brine is a DEX built to help you trade trustlessly and gas-free, without compromising on speed or ease of use. Our goal is to create a trading platform that combines the best of decentralized and centralized exchanges to give users a superior experience. The focus is on security and efficiency at the lowest trading fees.
Every team needs a great foundation, and we have ours: four inspired minds who came together to realize their vision of founding a company that makes the world a better place, one technological step at a time. They are backed by a team of earnest, hardworking individuals who are integral to the company's success.
Objectives:
We are looking for a candidate with 2.5+ years of experience in DevOps and a CKA certificate (mandatory) who can work on the following objectives:
● To build and set up new development tools and infrastructure.
● To understand the needs of operations teams and convey this to developers.
● To work on ways to automate and improve development and release processes.
● To identify technical problems.
● To work with developers to ensure the smooth running of the DevOps processes implemented in the company.
● To plan out projects and be involved in project management decisions.
● To act efficiently on feedback from fellow developers and to set up alerts and dashboards (see the sketch after this list).
● Build instrumentation for improving our development & deployment cycle.
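As a purely illustrative sketch of the alerting and instrumentation objectives above (not part of the job description), here is a minimal Python example that publishes a custom deployment-duration metric to CloudWatch and attaches an alarm to it. It assumes boto3 and AWS credentials are configured; the namespace, metric, and alarm names are hypothetical.

```python
# Illustrative only: push a custom deployment metric to CloudWatch and
# create a simple alarm on it. Namespace, metric, and alarm names are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

def record_deploy_duration(seconds: float) -> None:
    """Publish how long a deployment took, so it can be graphed on a dashboard."""
    cloudwatch.put_metric_data(
        Namespace="DevOps/Deployments",          # hypothetical namespace
        MetricData=[{
            "MetricName": "DeployDurationSeconds",
            "Value": seconds,
            "Unit": "Seconds",
        }],
    )

def ensure_slow_deploy_alarm() -> None:
    """Alert when the average deployment takes longer than 10 minutes."""
    cloudwatch.put_metric_alarm(
        AlarmName="slow-deployments",            # hypothetical alarm name
        Namespace="DevOps/Deployments",
        MetricName="DeployDurationSeconds",
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=600,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
    )

if __name__ == "__main__":
    ensure_slow_deploy_alarm()
    record_deploy_duration(412.0)
```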
What skills we are looking for:
● Experience as a DevOps Engineer.
● Good knowledge of CI/CD tools like Jenkins
● Proficient with Python/Golang
● Good knowledge of AWS, Docker, and Kubernetes.
● Excellent Problem-solving skills
● Excellent communication skills.
● Collaborative team spirit.
● Ownership/leadership skills.
Our Tech Stack
● Backend: Django, Ruby, Go
● Frontend: React + React Native
● Databases: MySQL, PostgreSQL, Redis
● Infrastructure: AWS

About Brine Fi
The decentralized order-book exchange for traders crafted by traders


Brine helps you trade trustlessly without having to compromise on fees, speed, experience, or liquidity. Only pay for what you trade 💰
Similar jobs
What the role needs
● Review the current DevOps infrastructure and redefine the code-merging strategy in line with product roll-out objectives
● Define a deployment-frequency strategy based on the product roadmap and ongoing product-market-fit tweaks and changes
● Architect benchmark Docker configurations based on the planned stack
● Establish uniformity of environments from developer machines through to multiple production environments
● Plan and execute test automation infrastructure
● Set up an automated stress-testing environment
● Plan and execute logging and stack-trace tooling
● Review DevOps orchestration tools and choices
● Coordinate with external data centers and AWS for provisioning, outages, or maintenance
Requirements
● Extensive experience with AWS cloud infrastructure deployment and monitoring
● Advanced knowledge of programming languages such as Python and Golang, and of writing code and scripts
● Experience with Infrastructure-as-Code and DevOps management tools (Terraform, Packer) for DevOps asset management, monitoring, infrastructure cost estimation, and infrastructure version management
● Ability to configure and manage data sources such as MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc.
● Experience with network, infrastructure, and OWASP security standards
● Experience with web server configuration (Nginx, HAProxy), SSL configuration on AWS, and understanding and management of subdomain-based product rollouts for clients
● Experience with deployment and monitoring of event-streaming and messaging technologies and tools such as Kafka, RabbitMQ, NATS.io, and Socket.IO (see the sketch after this list)
● Understanding of and experience with Disaster Recovery Plan execution
● Working with other senior team members to devise and execute strategies for data backup and storage
● Be aware of current CVEs, potential attack vectors, and vulnerabilities, and apply patches as soon as possible
● Handle incident responses, troubleshooting and fixes for various services
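As a purely illustrative sketch of the event-streaming experience mentioned above (not part of the job description), here is a minimal Python producer using the kafka-python package; the broker address, topic name, and payload fields are hypothetical.

```python
# Illustrative only: minimal trade-event producer using the kafka-python package.
# Broker address and topic name are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                        # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # JSON-encode payloads
)

def publish_trade(order_id: str, pair: str, price: float, qty: float) -> None:
    """Publish a trade event so downstream consumers can monitor and reconcile it."""
    producer.send("trades", {                                  # hypothetical topic
        "order_id": order_id,
        "pair": pair,
        "price": price,
        "qty": qty,
    })

if __name__ == "__main__":
    publish_trade("ord-123", "ETH-USDC", 3150.25, 0.5)
    producer.flush()  # block until buffered messages are delivered
```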
We are an on-demand e-commerce technology and services company and a tech-enabled 3PL (third-party logistics) provider. We unlock e-commerce for companies by managing the entire operations lifecycle:
Sell, Fulfil & Reconcile.
Using us, companies can:
• Store their inventory in our fulfilment centers (FCs)
• Sell their products on multiple sales channels (online marketplaces like Amazon, Flipkart, and their own website)
• Get their orders processed within a defined SLA
• Reconcile payments against their sales
The company combines infrastructure and dedicated experts to give brands accountability, peace of mind, and control over the e-commerce journey.
The company is working on a remarkable concept for running an e-commerce business, starting with establishing an online presence for many enterprises. It offers a combination of products and services to create a comprehensive platform and manage all aspects of running a brand online, including the development of an exclusive web store, handling of logistics, integration of all marketplaces, and so on.
Who are we looking for?
We are looking for a skilled and passionate DevOps Engineer to join our Centre of Excellence to build and scale effective software solutions for our e-commerce domain.
Wondering what your Responsibilities would be?
• Building and setting up new development tools and infrastructure
• Provide full support to the software development teams to deploy, run and roll out new services and new capabilities in Cloud infrastructure
• Implement CI/CD and DevOps best practices for software application teams and assist in executing the integration and operation processes
• Build proactive monitoring and alerting infrastructure services to support operations and system health
• Be hands-on in developing prototypes and conducting Proof of Concepts
• Work in an agile, collaborative environment, partnering with other engineers to bring new solutions to the table
• Join the DevOps Chapter where you’ll have the opportunity to investigate and share information about technologies within the DevOps Engineering Community
What Makes you Eligible?
• Bachelor’s Degree or higher in Computer Science or Software Engineering with appropriate experience
• Minimum of 1 year of proven experience as a DevOps Engineer
• Experience working in a DevOps culture, following Agile software development methodologies such as Scrum
• Proven experience in source code management tools like Bitbucket and Git
• Solid experience in CI/CD pipelines like Jenkins
• Demonstrated ability with configuration management tools (e.g., Terraform, Ansible, Docker, and Kubernetes) and repository tools like Artifactory
• Experience in Cloud architecture & provisioning
• Knowledge of programming against and querying NoSQL databases
• Teamwork skills with a problem-solving attitude
Wolken Software provides a suite of AI-enabled, SaaS 2.0 cloud-native applications for Customer Service and Enterprise Solutions, namely Wolken Service Desk, Wolken's IT Service Management, and Wolken's HR Case Management. We have replaced incumbents like Salesforce, ServiceNow, Zendesk, etc. at various Fortune 500 and Fortune 1000 companies.
JD:
AWS: 7-10 years of experience using a broad range of AWS technologies (e.g., EC2, RDS, ELB, S3, VPC, IAM, CloudWatch) to develop and maintain AWS-based cloud solutions, with an emphasis on cloud security best practices.
DevOps:
Solid experience as a DevOps Engineer in a 24x7 uptime AWS environment, including automation experience with configuration management tools.
Strong scripting and automation skills.
Expertise in Linux system administration.
Beneficial to have
- Basic DB administration experience (MySQL)
- Working knowledge of some of the major open-source web containers and servers like Apache, Tomcat, and Nginx.
- Understanding of network topologies and common network protocols and services (DNS, HTTP(S), SSH, FTP, SMTP).
- Experience in Docker, Ansible & Python.
Role & Responsibilities:
- Deploying, automating, maintaining, and managing AWS cloud-based production systems to ensure the availability, performance, scalability, and security of production systems.
- Build, release and configuration management of production systems.
- System troubleshooting and problem-solving across platform and application domains.
- Ensuring critical system security using best-in-class cloud security solutions.
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead and CTO to identify and establish DevOps practices in the company.
You will establish configuration management, automate our infrastructure, implement continuous integration, and train the team in DevOps best practices to achieve a continuously deployable system.
You will be part of a continually growing team. You will have the chance to be creative and think of new ideas. You will also be able to work on open-source projects, ranging from small to large.
What you’ll be doing – your role
- Improve CI/CD tooling
- Implement and improve monitoring and alerting
- Help support daily operations through the use of automation and assist in building a DevOps culture with our engineers for a better all-around software development and deployment experience
- Duties and tasks are varied and complex, and may require independent judgment; you should be fully competent in your own area of expertise
- Develop and maintain solutions for highly resilient services and infrastructure
- Implement automation to help deploy our services and maintain their operational health
- Contribute to the understanding of how our services are being used and help plan the capacity needs for future growth
What do we expect – experience and skills:
- Bachelor’s degree in Computer Science or related technical field, involving coding
- 3+ years of experience running large-scale customer-facing services
- 2-3 years of DevOps experience
- A strong desire and aptitude for system automation defines success in this role
- Linux experience, including expertise in system installation, configuration, administration, troubleshooting
- Experience with cloud-based providers such as AWS
- Experience with Kubernetes
- Experience with web-based APIs/RESTful services
- Experience with Configuration Management and infrastructure as code platforms (Ansible/Terraform)
- Experience with at least one scripting language (Python, Bash, JavaScript)
- Methodical approach to troubleshooting and documenting issues
- Experience in Docker orchestration and management
- Experience with implementing and maintaining CI/CD pipelines
Note: This vacancy is for CIMET, our client, for their Jaipur office.
Required Skills
• Automation is a part of your daily functions, so thorough familiarity with Unix Bourne shell scripting and Python is a critical survival skill.
• Integration and maintenance of automated tools
• Strong analytical and problem-solving skills
• Working experience with source control tools such as Git/GitHub/GitLab/TFS
• Experience with modern virtualization technologies (Docker, KVM, AWS, OpenStack, or any orchestration platforms)
• Automation of deployment, customization, upgrades, and monitoring through modern DevOps tools (Ansible, Kubernetes, OpenShift, etc.)
• Advanced Linux admin experience
• Using Jenkins or similar tools
• Deep understanding of container orchestration (preferably Kubernetes)
• Strong knowledge of object storage (preferably Ceph on Rook)
• Experience in installing, managing, and tuning microservices environments using Kubernetes and Docker, both on-premises and in the cloud.
• Experience in deploying and managing Spring Boot applications.
• Experience in deploying and managing Python applications using Django, FastAPI, or Flask.
• Experience in deploying machine learning pipelines/data pipelines using Airflow/Kubeflow/MLflow.
• Experience with web servers and reverse proxies like Nginx, Apache HTTP Server, and HAProxy.
• Experience with monitoring tools like Prometheus and Grafana (a minimal exporter sketch follows this list).
• Experience in provisioning & maintaining SQL/NoSQL databases.
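As a purely illustrative sketch of the Prometheus/Grafana monitoring item above (not part of the job description), here is a minimal Python exporter built on the prometheus_client package; the metric names, port, and simulated values are hypothetical.

```python
# Illustrative only: exposes a couple of application metrics for Prometheus to
# scrape, which can then be graphed in Grafana. Metric names are hypothetical.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently waiting in the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS_TOTAL.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(1)
```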
Desired Skills
• Configuration software: Ansible
• Excellent communication and collaboration skills
• Good experience with networking technologies such as load balancers, ACLs, firewalls, VIPs, and DNS
• Programmatic experience with AWS, DO, or GCP storage and machine images
• Experience with various Linux distributions
• Knowledge of Azure DevOps Server
• Docker management and troubleshooting
• Familiarity with micro-services and RESTful systems
• AWS / GCP / Azure certification
• Interact with Engineering to support, maintain, and design backend infrastructure for product support
• Create fully automated global cloud infrastructure that spans multiple regions.
• A great attitude toward learning the newest technologies, and a team player
Cloud DevOps Architect
· Practices self-leadership and promotes learning in others by building relationships with cross-functional stakeholders; communicating information and providing advice to drive projects forward; adapting to competing demands and new responsibilities; providing feedback to others; mentoring junior team members; creating and executing plans to capitalize on strengths and improve opportunity areas; and adapting to and learning from change, difficulties, and feedback.
· Ensure appropriate translation of business requirements and functional specifications into physical program designs, code modules, stable application systems, and software solutions by partnering with Business Analysts and other team members to understand business needs and functional specifications.
· Build use cases/scenarios and reference architectures to enable rapid adoption of cloud services in the product's cloud journey.
· Provide insight into recommendations for technical solutions that meet design and functional needs.
· Experience or familiarity with firewalls/NGFWs deployed in a variety of form factors (Check Point, Imperva, Palo Alto, Azure Firewall).
· Establish credibility & build deep relationships with senior technical individuals to enable them to be cloud advocates.
· Participate in deep architectural discussions to build confidence and ensure engineering success when building new applications and migrating existing applications, software, and services to AWS and GCP.
· Conduct deep-dive hands-on education/training sessions to transfer knowledge to DevOps and engineering teams considering or already using public cloud services.
· Be a cloud (Amazon Web Services, Google Cloud Platform) and DevOps evangelist and advise stakeholders on cloud readiness, workload identification, migration, and identifying the right multi-cloud mix to effectively accomplish business objectives.
· Understand engineering requirements and architect scalable solutions adopting DevOps and leveraging advanced technologies such as AWS CodePipeline, AWS CodeCommit, ECS containers, API Gateway, CloudFormation templates, AWS Kinesis, Splunk, Dome9, AWS SQS, AWS SNS, SonarQube, microservices, and Kubernetes to realize stronger benefits and future-proof outcomes for customer-facing applications.
· Be an integral part of the technology and architecture community in the public cloud partners (AWS, GCP, Azure) and bring in new services launched by cloud providers into 8K Miles Product Platform scope.
· Capture and share best-practice knowledge amongst the DevOps and Cloud community.
· Act as a technical liaison between product management, service engineering, and support teams.
· Qualification:
o Master’s Degree in Computer Science/Engineering with 12+ years’ experience in information technology (networking, infrastructure, database).
o Strong and recent exposure to AWS/GCP/Azure cloud platforms and designing hybrid multi-cloud solutions. Certification as an AWS Solutions Architect Professional or similar is preferred.
· Working knowledge of UNIX shell scripting.
· Strong hands-on programming experience in Python
· Working knowledge of data visualization tools – Tableau.
· Experience working in cloud environment — AWS.
· Experience working with modern tools in the Agile Software Development Life Cycle.
· Version Control Systems (Ex. Git, Github, Stash/Bitbucket), Knowledge Management (Ex. Confluence, Google Docs), Development Workflow (Ex. Jira), Continuous Integration (Ex. Bamboo), Real Time Collaboration (Ex. Hipchat, Slack).
- Good experience with AWS services like Elastic Compute Cloud (EC2), IAM, RDS, API Gateway, Cognito, etc.
- Experience using Git, SonarQube, Ansible, Nexus, Nagios, etc.
- Strong experience in creating, importing, and launching volumes with security groups, auto-scaling, load balancers, and fault tolerance (a minimal sketch follows this list).
- Experience in configuring Jenkins jobs with related plugins for building, testing, and continuous deployment to accomplish complete CI/CD.
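As a purely illustrative sketch of the EC2/security-group experience above (not part of the job description), here is a minimal Python example using boto3 that creates a security group and launches a single instance behind it; the region, VPC ID, AMI ID, and names are hypothetical placeholders.

```python
# Illustrative only: create a security group and launch one EC2 instance behind it.
# Region, VPC ID, AMI ID, and names are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

def launch_web_instance(vpc_id: str, ami_id: str) -> str:
    """Create an HTTPS-only security group and start a single t3.micro instance."""
    sg = ec2.create_security_group(
        GroupName="web-sg-example",          # hypothetical name
        Description="Allow inbound HTTPS",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )
    result = ec2.run_instances(
        ImageId=ami_id,                      # e.g. an Amazon Linux AMI in your region
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=[sg["GroupId"]],
    )
    return result["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print(launch_web_instance(vpc_id="vpc-0123456789abcdef0",
                              ami_id="ami-0123456789abcdef0"))
```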
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and they shouldn't have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.
Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.
Primary Responsibilities
- Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
- Optimization and execution of the CI/CD pipelines of multiple products, and timely promotion of releases to production environments
- Ensuring that mission critical applications are deployed and optimised for high availability, security & privacy compliance and disaster recovery.
- Strategize, implement and verify secure coding techniques, integrate code security tools for Continuous Integration
- Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
- Technical documentation through all stages of development
- Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation
Requirements
- Minimum of 6 years of experience with DevOps tools.
- Working experience with Linux and with container orchestration and management technologies (Docker, Kubernetes, EKS, ECS, …).
- Hands-on experience with infrastructure-as-code solutions (CloudFormation, Terraform, Ansible, etc.).
- Background in building and maintaining CI/CD pipelines (GitLab CI, Jenkins, CircleCI, GitHub Actions, etc.).
- Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
- Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
- DevOps mindset and experience with Agile/Scrum methodology
- Basic knowledge of storage and databases (SQL and NoSQL)
- Good understanding of networking technologies, HAProxy, firewalling, and security.
- Experience in Security vulnerability scans and remediation
- Experience in API security and credentials management
- Worked on Microservice configurations across dev/test/prod environments
- Ability to quickly adapt to new languages and technologies
- A strong team player attitude with excellent communication skills.
- Very high sense of ownership.
- Deep interest and passion for technology
- Ability to plan projects, execute them and meet the deadline
- Excellent verbal and written English communication.
About Us
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to FIs to automate calls and customer inquiries across multiple channels and engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions’ call centers from a cost to a revenue center.
Our core technology is built 100% in-house with several breakthroughs in Natural Language Understanding. Our parser is built on zero-shot learning, which helps us launch industry-specific IVAs that can achieve over 90% accuracy on day one.
We are 45 people strong, with employees spread across India and US locations. Many of them come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with 20+ years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who were previously part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative set up
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai's products and services, as well as to enhance interface.ai's employees' productivity (see the sketch after this list)
- Debugging and optimizing code and automating routine tasks
- Troubleshooting and diagnosing issues (hardware or software), and proposing and implementing solutions so that they recur less frequently
- Perform the periodic on-call duty to handle security, availability, and reliability of interface.ai’s products
- Following and writing good code and solid engineering practices
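As a purely illustrative sketch of the kind of automation scripting described above (not part of the job description), here is a minimal Python availability probe that measures endpoint latency and flags failures; the health-check URL is hypothetical.

```python
# Illustrative only: a tiny availability probe that measures endpoint latency and
# flags failures, the kind of check on-call automation might run. URL is hypothetical.
import time

import requests

ENDPOINTS = ["https://example.com/healthz"]  # hypothetical health-check URL

def probe(url: str, timeout: float = 5.0) -> dict:
    """Return status code and latency for one endpoint, or mark it as down."""
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=timeout)
        return {"url": url, "up": resp.ok, "status": resp.status_code,
                "latency_ms": round((time.monotonic() - start) * 1000, 1)}
    except requests.RequestException as exc:
        return {"url": url, "up": False, "error": str(exc)}

if __name__ == "__main__":
    for result in (probe(u) for u in ENDPOINTS):
        print(result)
```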
Requirements
You can be a great fit if you are:
- Extremely self motivated
- Ability to learn quickly
- Growth Mindset (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotional Maturity (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Worked in a startup of 5 to 30 employees
- Developer with a strong interest in systems Design. You will be building, maintaining, and scaling our cloud infrastructure through software tooling and automation.
- 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong System/network debugging skills
- Experience with management/automation tools such as Terraform/Puppet/Chef/SALT
- Experience with setting up production-level monitoring and telemetry
- Expertise in Container management & AWS
- Experience with kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
- API
- AWS
We need a strong Amazon Web Services developer with experience developing APIs using Lambda functions. Very good familiarity with APIs and knowledge of deploying APIs in AWS are mandatory.
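As a purely illustrative sketch of an API built on Lambda behind API Gateway (not part of the job description), here is a minimal Python handler shaped for an API Gateway proxy integration, which passes the HTTP request in as the event and expects statusCode, headers, and body back; the query-parameter name and response payload are hypothetical.

```python
# Illustrative only: a minimal AWS Lambda handler shaped for an API Gateway
# proxy integration. Query-parameter name and payload fields are hypothetical.
import json

def lambda_handler(event, context):
    """Return a JSON response in the format API Gateway's proxy integration expects."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```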

