50+ Kubernetes Jobs in India
We are looking for a seasoned DevOps Engineer with a strong background in solution architecture, ideally from the Banking or BFSI (Banking, Financial Services, and Insurance) domain. This role is crucial for implementing scalable, secure infrastructure and CI/CD practices tailored to the needs of high-compliance, high-availability environments. The ideal candidate will have deep expertise in Docker, Kubernetes, cloud platforms, and solution architecture, with knowledge of ML/AI and database management as a plus.
Key Responsibilities:
● Infrastructure & Solution Architecture: Design secure, compliant, and high-performance cloud infrastructures (AWS, Azure, or GCP) optimized for BFSI-specific applications.
● Containerization & Orchestration: Lead Docker and Kubernetes initiatives, deploying applications with a focus on security, compliance, and resilience.
● CI/CD Pipelines: Build and maintain CI/CD pipelines suited to BFSI workflows, incorporating automated testing, security checks, and rollback mechanisms.
● Cloud Infrastructure & Database Management: Manage cloud resources and automate provisioning using Terraform, ensuring security standards. Optimize relational and NoSQL databases for BFSI application needs.
● Monitoring & Incident Response: Implement monitoring and alerting (e.g., Prometheus, Grafana) for rapid incident response, ensuring uptime and reliability (a short illustrative sketch follows this list).
● Collaboration: Work closely with compliance, security, and development teams, aligning infrastructure with BFSI standards and regulations.
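As a minimal, illustrative sketch of the monitoring-and-alerting bullet above (not a requirement of this posting): a Python service instrumented with the prometheus_client library, exposing metrics that Prometheus could scrape and Grafana could alert on. The metric names, port, and values are assumptions.

```python
# Hedged sketch: expose service metrics for Prometheus to scrape.
# Metric names and the port are illustrative placeholders.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        REQUESTS_TOTAL.inc()                    # count handled work
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(5)
```

Alert rules and dashboards for uptime and incident response would then be defined in Prometheus and Grafana on top of such metrics.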
Qualifications:
● Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field.
● Experience: 5+ years of experience in DevOps with cloud infrastructure and solution architecture expertise, ideally in ML/AI environments.
● Technical Skills:
○ Cloud Platforms: Proficient in AWS, Azure, or GCP; certifications (e.g., AWS Solutions Architect, Azure Solutions Architect) are a plus.
○ Containerization & Orchestration: Expertise with Docker and Kubernetes, including experience deploying and managing clusters at scale.
○ CI/CD Pipelines: Hands-on experience with CI/CD tools like Jenkins, GitLab CI, or GitHub Actions, with automation and integration for ML/AI workflows preferred.
○ Infrastructure as Code: Strong knowledge of Terraform and/or CloudFormation for infrastructure provisioning.
○ Database Management: Proficiency in relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, DynamoDB), with a focus on optimization and scalability.
○ ML/AI Infrastructure: Experience supporting ML/AI pipelines, model serving, and data processing within cloud or hybrid environments.
○ Monitoring and Logging: Proficient in monitoring tools like Prometheus and Grafana, and log management solutions like ELK Stack or Splunk.
○ Scripting and Automation: Strong skills in Python, Bash, or PowerShell for scripting and automating processes (see the short sketch after this list).
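To make the scripting-and-automation bullet concrete, here is a small, hypothetical Python/boto3 sketch of the kind of compliance check a BFSI DevOps engineer might automate; the required tag key and region are assumptions, not requirements from this posting.

```python
# Illustrative compliance-automation sketch: flag EC2 instances missing a
# required tag (useful for audit reporting). Tag key and region are assumed.
import boto3

REQUIRED_TAG = "CostCenter"


def untagged_instances(region="ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    print("Instances missing the required tag:", untagged_instances())
```

In practice such a check would typically run on a schedule (for example from a CI job) and feed its findings into the team's alerting.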
Job Description:
- 4 to 8 years of experience with Java, Spring Boot, Microservices, Angular, Docker, and Kubernetes
- Knowledge of multi-threading concepts, TCP/IP, databases, and REST-based JSON APIs
- Experience with build and deployment tools: Maven, Git, JUnit
- Experience building and working with DevOps toolchains (GitHub Actions, Jenkins)
- Experience with responsive UI development
- Demonstrates strong communication skills and the initiative to solve problems and convey solutions to peers and product owners
- Experience with the Scrum process
- Experience with event-driven architecture
- Knowledge of UI testing and continuous integration
- Working knowledge of TDD and a TDD mindset
- Pair programming experience
- Functional knowledge of the Accounts Payable domain is an added advantage
Client located in Navi Mumbai.
Job Description: OpenShift Engineer
Location: CBD Belapur, Navi Mumbai
Domain: Banking
Role Overview:
We are seeking an experienced OpenShift Engineer with 5+ years of experience to
manage and optimize Red Hat OpenShift environments, implement CI/CD pipelines,
and support containerized application deployments in the banking domain.
Key Responsibilities:
● Design, deploy, and manage Red Hat OpenShift clusters for high availability and performance.
● Implement and maintain CI/CD pipelines to automate build, test, and deployment processes.
● Containerize and deploy applications to the OpenShift platform.
● Manage container orchestration and ensure seamless deployment strategies.
● Support periodic off-hours software installations and upgrades.
● Collaborate with development teams to resolve deployment issues.
Requirements:
● 5+ years of experience in OpenShift, Kubernetes, and Docker.
● Expertise in CI/CD tools like Jenkins or GitLab CI.
● Strong understanding of container security and compliance best practices.
● Banking domain experience is a plus.
Sarvaha Systems, a niche software development company, collaborates with some of the best-funded startups and established businesses worldwide. We are seeking an experienced Lead Software Engineer with a minimum of 7 years of experience in designing systems and backend services. In this role, you will evaluate new requirements, develop architecture models, and integrate critical business requirements like high availability, redundancy, and disaster recovery into infrastructure designs. The role requires working with a globally distributed team across the US, Europe, and India.
Key Responsibilities
- Design, develop, and implement new products and interfaces.
- Build scalable backends in Java & modern technologies in a microservices architecture.
- Leverage modern platform technologies like Docker, Kubernetes, and related tools.
- Create technical product documentation and propose product improvements.
- Collaborate with Product Owners to break down Epics and Stories into actionable tasks.
- Guide teams in adopting best practices and ensuring high-quality code standards.
- Perform maintenance tasks, including resolving software dependencies on existing products.
- Report key activities and progress to Engineering Managers and Product Owners.
Skills Required
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering, is highly preferable
- Strong experience with Java and its frameworks (Java 8 or above).
- Deep understanding of Spring, Spring Boot, and Hibernate (JPA).
- Hands-on experience with highly scalable Micro-services in clustered/multi-node environments.
- Proficiency with orchestration platforms and tools like Kubernetes and Docker.
- Experience with NoSQL/SQL databases (e.g., PostgreSQL, MariaDB, MongoDB).
- Knowledge of networking protocols such as TCP/IP, HTTP, and SSL.
- Expertise in Linux environments and security best practices.
Desired Skills
- Experience with Kafka for event-driven architectures.
- Familiarity with Linux shell scripting and Python.
- Hands-on experience with deployment tooling like Ansible or Helm.
- Strong ability to structure and present technical information to stakeholders.
- Fluency in English (both verbal and written).
- Additional language skills are a plus.
Position Benefits
- Top notch remuneration and excellent growth opportunities
- An excellent, no-nonsense work environment with the very best people to work with
- Highly challenging software implementation problems
- Hybrid Mode.
Job Title : Senior Software Engineer
Location : Gurugram (Full-time)
Job Description :
- We are looking for a skilled and experienced Java Engineer to join our dynamic team.
- The ideal candidate will have 4 to 10 years of hands-on experience in software development, with proven experience as a Java Developer and a strong focus on Spring, Spring Boot, relational databases, and AWS technologies.
- Strong understanding of monolithic & microservices architecture is required. You will play a crucial role in designing, developing, and maintaining our applications, ensuring their performance, quality, and responsiveness.
Key Responsibilities :
● Design, develop, and maintain scalable applications using Java, Spring and Spring Boot.
● Develop and manage relational databases. Should be able to write complex SQL queries.
● Ensure the best possible performance, quality, and responsiveness of the applications.
● Have strong debugging skills to identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
● Deploy, manage, and scale applications on AWS.
● Conduct code reviews and integration testing to ensure software quality and reliability.
● Collaborate with front-end developers to integrate user-facing elements with server-side logic.
● Collaborate with cross-functional teams to define, design, and ship new features.
● Stay updated with emerging technologies and industry trends.
Qualifications :
● Bachelor’s degree in Computer Science, Information Technology, or a related field.
● 4 to 10 Years of experience in software development.
● Proficient in Java 8+, Spring and Spring Boot.
● Experience with PostgreSQL, MySQL or other relational databases.
● Experience with microservices architecture.
● Understanding of Rest API design and development.
● Hands on experience with Unit Testing frameworks such as Junit, Mockito.
● Experience with version control systems such as Git.
● Solid understanding of object-oriented programming.
● Strong problem-solving skills and attention to detail.
● Excellent communication and teamwork skills.
Preferred Qualifications :
● Strong knowledge of AWS services and best practices.
● Knowledge of containerization technologies like Docker and Kubernetes.
● Familiarity with CI/CD pipelines and DevOps practices.
ABOUT THE TEAM:
The production engineering team is responsible for the key operational pillars (Reliability, Observability, Elasticity, Security, and Governance) of the cloud infrastructure at Swiggy. We strive to excel and continuously improve on these key operational pillars. We design, build, and operate Swiggy’s cloud infrastructure and developer platforms to provide a seamless experience to our internal and external consumers.
What qualities are we looking for:
10+ years of professional experience in infrastructure, production engineering
Strong design, debugging, and problem-solving skills
Proficiency in at least one programming language like Python, GoLang or Java.
B Tech/M Tech in Computer Science or equivalent from a reputed college.
Hands-on experience with AWS and Kubernetes or similar cloud/infrastructure platforms
Hands-on with DevOps principles and practices (everything-as-code, CI/CD, test everything, proactive monitoring, etc.)
Deep understanding of OS/virtualization/Containerization, network protocols & concepts
Exposure to modern-day infrastructure technologies, expertise in building and operating distributed systems.
Hands-on coding on any of the languages like Python or GoLang.
Familiarity with software engineering practices including unit testing, code reviews, and design documentation.
Technically mentor and lead the team towards engineering and operational excellence
Act like an owner, strive for excellence.
What will you get to do here?
Be part of a Culture where Customer Obsession, Ownership, Teamwork, Bias for Action and Insist on High standards are a way of life
Coming up with best practices to help the team achieve their technical tasks and continually improving the team's technology
Be a hands-on engineer; ensure the frameworks/infrastructure built are well designed, scalable, and of high quality.
Build and/or operate platforms that are highly available, elastic, scalable, operable and observable
Experiment with new & relevant technologies and tools, and drive adoption while measuring yourself on the impact you can create.
Implementation of long-term technology vision for the team.
Build/Adapt and implement tools that empower the Swiggy engineering teams to self-manage the infrastructure and services owned by them.
You will identify, articulate, and lead various long-term tech vision, strategies, cross-cutting initiatives and architecture redesigns.
Design systems and make decisions that will keep pace with the rapid growth of Swiggy. Document your work and decision-making processes, and lead presentations and discussions in a way that is easy for others to understand.
Creating architectures & designs for new solutions around existing/new areas. Decide technology & tool choices for the team.
A team that excels in providing top-notch business solutions to industries such as E-commerce, Marketing, Banking and Finance, Insurance, Transport, and many more. For a generation driven by data, insights, and decision-making, we help businesses make the best possible use of data and enable them to thrive in this competitive space. Our expertise spans Data, Analytics, and Engineering, to name a few.
We are seeking a Senior DevOps Engineer with 5+ years of experience to enhance cloud infrastructure and optimize application performance.
Qualifications:
- Bachelor’s degree in Computer Science or related field.
- 5+ years of DevOps experience with strong scripting skills (shell, Python, Ruby).
- Familiarity with open-source technologies and application development methodologies.
- Experience in optimizing both stand-alone and distributed systems.
Key Responsibilities:
- Design and maintain DevOps practices for seamless application deployment.
- Utilize AWS tools (EBS, S3, EC2) and automation technologies (Ansible, Terraform).
- Manage Docker containers and Kubernetes environments.
- Implement CI/CD pipelines with tools like Jenkins and GitLab.
- Use monitoring tools (Datadog, Prometheus) for system reliability.
- Collaborate effectively across teams and articulate technical choices.
at Sarvaha Systems Private Limited
Sarvaha would like to welcome a Senior Software Engineer or an aspiring architect with a minimum of 5 years of excellent experience in writing applications using .NET Core 6+ technologies. This is a hands-on role working on a product that gets delivered to the world’s largest organizations and governments. The product is in the space of situational awareness and integrates physical security infrastructure to help keep critical assets safe, such as a) corporate headquarters, b) government buildings, c) rail, sea, and air transport hubs, and d) smart cities.
Sarvaha is a niche software development company that works with some of the best funded startups and established companies across the globe. Please visit our website at http://www.sarvaha.com to know more about us.
Key Responsibilities
- Work as a lead contributor (designing, developing, testing) for creating technical solutions
- Working with Engineering Managers, Product Owners, Architects and QA Engineers
- Developing our Web and server solutions that integrate edge hardware devices into a situational aware platform for our customers. Examples of the different types of problems we ask the development team to solve: Displaying geospatially aware smart city data, Displaying geospatial tracking data, Saving video feed snapshots, Dynamic near real-time event processing, Video overlay integration, Access control management, Distributed site infrastructure management etc.
- Implement .NET Core 6+ C# solutions and regularly integrate with functional library frameworks within our AWS Kubernetes-hosted microservices.
- Contribute to scoping, estimating, and proposing technical solutions & development
- Investigate new technologies, provide analysis and recommendations on technical choices
- Responsible for providing hands-on expert level assistance to developers for technical issues
- Work with other teams such as DevOps, pre-sales & sales, partners, clients, etc. as an SME
Skills Required
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Solid grasp of data structures and object-oriented principles in practical applications
- Minimum 6 years of experience as a Windows developer using .NET Core 6+ and C#
- Experience with micro-services interacting within Kubernetes clusters using REST (OpenAPI).
- Strong API design experience & hardware integration exposure
- Experience using Containers (Docker, Kubernetes)
- Excellent verbal and written communication & attention to detail
- Agile development experience including working with JIRA & Confluence
- An attitude of craftsmanship and constant learning of new skills
- Event stream experience (Kafka)
- Experience with concurrent distributed systems
- Interest or experience in near real time computing/communication
Position Benefits
- Top notch remuneration and excellent growth opportunities
- An excellent, no-nonsense work environment with the very best people to work with
- Highly challenging software implementation problems
- Hybrid work model with established remote work options.
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We work at scale; our infrastructure handles around 3 Billion+ API hits & 2 Billion+ message events monthly and over 25M views of customer pages daily. We also handle over 80 Terabytes of data across 5 Databases.
About the Team:
The Expansion Products team is responsible for driving volumetric & usage-based upgrades and upsells within the platform to maximize revenue potential (apart from the subscription revenue). We do this by building innovative products & features that solve real-world problems for agencies and allow them to consolidate their offering to their clients in a single platform packaged under their white-labeled brand. The Expansion Products team focuses exclusively on products that can demonstrate adoption, drive up engagement in target segments, and are easily monetizable. This team handles multiple product areas including Phone System, email system, online listing integration, WordPress Hosting, Memberships & Courses, Mobile Apps, etc.
About the Role:
We’re looking for a skilled Senior Software Engineer for the Membership Platform to help us take our platform’s infrastructure to the next level. In this role, you'll focus on keeping our databases fast and reliable, improving and managing the infrastructure, and reducing technical debt so we can scale smoothly as we grow. You’ll play a key part in ensuring our platform is stable, secure, and easy for our product teams to work with. This is an exciting opportunity to work on large-scale systems and make a direct impact on the experience of millions of users.
Responsibilities:
- Optimize and manage scalable databases to ensure high performance and reliability.
- Automate and maintain infrastructure using IaC tools, CI/CD pipelines, and best security practices.
- Identify, prioritize, and address technical debt to improve performance and maintainability.
- Implement monitoring and observability solutions to support high availability and incident response.
- Collaborate with cross-functional teams and document processes, mentoring engineers and sharing knowledge.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 4+ years in platform engineering, with expertise in large-scale databases and infrastructure.
- Experience in full-stack engineering with Node.js and modern JavaScript frameworks like Vue.js (preferred), React.js, or Angular.
- Strong background in cloud platforms (AWS, GCP, or Azure).
- Proficient in building scalable applications and comfortable understanding the end-to-end flow of the software.
- Experience with relational/non-relational databases, e.g., MySQL, MongoDB, Firestore.
- Experience with monitoring tools (e.g., Prometheus, Grafana); containerization (Docker, Kubernetes) and video streaming knowledge are a plus.
Greetings!
Wissen Technology is hiring for Kubernetes Lead/Admin.
Required:
- 7+ years of relevant experience in Kubernetes
- Must have hands-on experience with implementation, CI/CD pipelines, EKS architecture, Argo CD, and StatefulSet services (a brief illustrative sketch follows this list).
- Good to have: exposure to scripting languages
- Should be open to working from Chennai
- Work mode will be Hybrid
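For illustration only (not part of the requirements above): a short Python sketch, using the official Kubernetes client, of the kind of StatefulSet health report a Kubernetes lead might script around EKS and Argo CD rollouts. Kubeconfig-based cluster access is assumed.

```python
# Hedged sketch: report StatefulSet readiness across all namespaces using the
# official Kubernetes Python client. Assumes a valid local kubeconfig.
from kubernetes import client, config


def report_statefulsets():
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    for sts in apps.list_stateful_set_for_all_namespaces().items:
        ready = sts.status.ready_replicas or 0
        print(f"{sts.metadata.namespace}/{sts.metadata.name}: "
              f"{ready}/{sts.spec.replicas} replicas ready")


if __name__ == "__main__":
    report_statefulsets()
```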
Company profile:
Company Name : Wissen Technology
Group of companies in India : Wissen Technology & Wissen Infotech
Work Location - Bangalore
Website : www.wissen.com
Wissen Thought leadership : https://www.wissen.com/articles/
LinkedIn: https://www.linkedin.com/company/wissen-technology
Client based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage for the candidate, but not mandatory.
We're seeking an experienced Backend Software Engineer to join our team.
As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop.
This includes APIs, databases, and server-side logic.
Responsibilities:
- Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
- Write clean, efficient, and well-documented code that adheres to industry standards and best practices
- Participate in code reviews and contribute to the improvement of the codebase
- Debug and resolve issues in the existing codebase
- Develop and execute unit tests to ensure high code quality
- Work with DevOps engineers to ensure seamless deployment of software changes
- Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
- Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
- Collaborate with cross-functional teams to identify and prioritize project requirements
Requirements:
- At least 2+ years of experience building scalable and reliable backend systems
- Strong proficiency in at least one programming language such as Python, Node.js, Golang, or RoR
- Experience with at least one framework such as Django, Express, or gRPC
- Knowledge of database systems such as MySQL, PostgreSQL, MongoDB, Cassandra, or Redis
- Familiarity with containerization technologies such as Docker and Kubernetes
- Understanding of software development methodologies such as Agile and Scrum
- Flexibility in picking up a new technology stack and ramping up on it fairly quickly
- Bachelor's/Master's degree in Computer Science or related field
- Strong problem-solving skills and ability to collaborate effectively with cross-functional teams
- Good written and verbal communication skills in English
at Phonologies (India) Private Limited
Job Description
Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.
Role & Responsibilities
- Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
- LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLflow and Weights & Biases.
- Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
- Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
- Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
- Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
Preferred Candidate Profile
- Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
- Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
- Model Management: Hands-on experience with MLflow, Weights & Biases, and model lifecycle management (a short logging sketch follows this list).
- AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
- Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
- MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
- Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
- Education: Degree in Computer Science, Data Engineering, or related field.
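As a small, hedged illustration of the model-lifecycle tooling mentioned above, the sketch below logs a fine-tuning run's parameters and metrics with MLflow; the experiment name, parameters, and values are placeholders rather than anything from this posting.

```python
# Illustrative only: track a fine-tuning run's parameters and metrics with MLflow.
import mlflow

mlflow.set_experiment("llm-finetuning-demo")  # assumed experiment name

with mlflow.start_run(run_name="lora-baseline"):
    mlflow.log_param("base_model", "example-7b")   # placeholder parameter
    mlflow.log_param("learning_rate", 2e-4)
    for step, loss in enumerate([1.9, 1.4, 1.1]):  # stand-in training loop
        mlflow.log_metric("train_loss", loss, step=step)
```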
Perks and Benefits
- Competitive Compensation: INR 20L to 30L per year.
- Innovative Work Environment for Personal Growth: Work with cutting-edge AI and data engineering tools in a collaborative setting, for continuous learning in data engineering and AI.
Supercoder is hiring for a partner company. Find the details below:
- Company Location: Korea (the Republic of)
- Job Type: Remote (Full Time)
- Salary: Negotiable based on experience
- Hiring Process: Mentioned at the end
Job Overview
Seeking an experienced Mendix Developer to join our team on a long-term remote basis. In this role, you will be responsible for designing, developing, and implementing applications on the Mendix platform, leveraging cloud environments like AWS, Azure, and local platforms. Your main goal will be to create functional and user-friendly applications that meet diverse client requirements.
Key Responsibilities
- Contribute to the development of advanced CMMS (Computerized Maintenance Management Systems) and MES (Manufacturing Execution Systems) solutions.
- Collaborate with cross-functional teams to gather requirements and understand client needs.
- Design, develop, and deploy scalable applications on the Mendix platform.
- Customize and configure applications to meet client specifications.
- Implement best practices for application security, testing, and performance.
- Troubleshoot and resolve technical issues to ensure smooth application functionality.
- Conduct unit testing and quality assurance to ensure high standards in software performance.
- Stay updated with the latest Mendix features, tools, and development techniques.
- Provide technical guidance to junior developers and support overall team development.
- Document and maintain application code to enhance team collaboration and future project accessibility.
Required Qualifications
- Education: Bachelor’s degree in Computer Science, Software Engineering, or a related field.
- Experience: Minimum of 5-6 years of experience in Mendix development.
- Proven experience with Mendix Solutions, Modules, and Widgets.
- Certifications: Mendix Advanced or Expert certification is highly preferred.
Technical Skills:
- Proficient in Mendix platform with hands-on experience in app design and development.
- Knowledge of Kubernetes (EKS, AKS) for deployment.
- Familiarity with IoT, CMMS, and MES for industry-specific application development.
- Experience in database management and developer-level database technologies.
Soft Skills:
- Strong problem-solving and analytical abilities.
- Excellent communication and interpersonal skills.
- Ability to work independently and collaboratively within a team.
Preferred Qualifications
- Prior experience with Siemens products.
- Familiarity with Agile development methodologies.
- Experience in constructing MSA (Microservices Architecture) and multi-tenancy in Mendix environments.
Tech Stack
- Primary Technologies: Mendix, Kubernetes, Java, IoT, .NET.
- Additional Skills: Knowledge of cloud environments (AWS, Azure) and experience with database technologies.
Hiring Process:
- Sign up on Supercoder's platform via the Apply Now link
- Complete your profile.
- Apply for all the jobs you like
- Complete an online test and a technical interview with Supercoder
- Developers who pass the above tests are proposed to the company.
- The client reviews the developer's profile and arranges a final interview.
P.S: Supercoder does not charge any registration or commission from developers
Cleantech Industry Resources
Core Responsibilities
Front-end Development:
o Develop web applications using React and Flask (a minimal endpoint sketch follows this section).
o Implement responsive design to ensure user-friendly interfaces.
o Utilize mapping libraries (e.g., Leaflet, Mapbox) to display geospatial data.
Back-end Integration:
o Work with APIs and manage data flow for project analytics and management.
o Enhance features for robust data handling and real-time project analysis.
Cloud Deployment:
o Deploy and manage web applications on AWS and similar platforms.
o Optimize for performance, scalability, and storage efficiency.
GIS and Mapping Tools:
o Develop tools using GIS data like KML and Shapefiles.
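A minimal sketch of the React + Flask pattern referenced in the front-end bullet above: a Flask endpoint that serves project sites as GeoJSON for a Leaflet or Mapbox layer to render. The route, fields, and coordinates are illustrative assumptions.

```python
# Hedged sketch: a Flask endpoint returning project sites as GeoJSON,
# suitable for consumption by a Leaflet/Mapbox layer in a React front end.
from flask import Flask, jsonify

app = Flask(__name__)

SITES = [
    {"name": "Demo Solar Site", "lon": 77.5946, "lat": 12.9716},  # placeholder data
]


@app.route("/api/sites")
def sites_geojson():
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [s["lon"], s["lat"]]},
            "properties": {"name": s["name"]},
        }
        for s in SITES
    ]
    return jsonify({"type": "FeatureCollection", "features": features})


if __name__ == "__main__":
    app.run(debug=True)
```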
Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary, benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.
Product based company
Responsibilities:
● Lead the design and development of sophisticated, highly available, and secure server-side applications with a primary focus on Golang.
● Collaborate with cross-functional teams to understand requirements, architect solutions, and deliver high-quality software products.
● Mentor and guide junior engineers, sharing your engineering expertise and best practices to foster skill development within the team.
● Analyze and optimize performance, scalability, and reliability of existing Golang applications, making strategic improvements where necessary.
● Design and implement automated unit and integration tests to ensure code quality, maintainability, and stability.
● Stay up-to-date with the latest advancements in software technologies, recommending their adoption when appropriate.
● Champion code reviews, architectural discussions, and technical documentation to maintain high development standards.
● Troubleshoot and resolve complex issues, providing innovative solutions to overcome challenges.
● Contribute to the recruitment and hiring process by participating in interviews, evaluating candidates, and providing input on hiring decisions.
Requirements
● Bachelor's or Master's degree in Computer Science, or a related field.
● 3+ years of experience in software development, with substantial experience in Golang and cloud infrastructure.
● Expert-level proficiency in designing and developing high-performance, concurrent applications with Golang.
● Experience with distributed systems, microservices architecture, and containerization (e.g., Docker, Kubernetes).
● Solid knowledge of software testing methodologies and tools, including unit testing and integration testing for Golang applications.
● Demonstrated ability to lead projects, collaborate effectively with teams, and mentor junior engineers.
● Excellent problem-solving and analytical skills, with the ability to tackle complex technical challenges.
● Having prior experience in the FinTech domain would be an added advantage.
About The Role:
The products/services of Eclat Engineering Pvt. Ltd. are being used by some of the leading institutions in India and abroad. Our services/products are rapidly growing in demand. We are looking for a capable and dynamic Senior DevOps Engineer to help set up, maintain, and scale the infrastructure operations. This individual will have the challenging responsibility of channelling our IT infrastructure and offering customer services at stringent international standards of service quality. This individual will leverage the latest IT tools to automate and streamline the delivery of our services while implementing industry-standard processes and knowledge management.
Roles & Responsibilities:
- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and scalable.
- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.
- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource usage. Implement cost-saving measures while maintaining scalability and reliability.
- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure consistency across environments. Automate configuration changes and updates.
- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.
- Collaboration with Development and Operations Teams: Foster collaboration between development and operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure issues and improving the development process.
- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure business continuity in the event of system failures or other disruptions.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations, processes, and best practices. Share knowledge and mentor junior team members.
- Technical Leadership and Innovation: Stay up-to-date with industry trends and emerging technologies. Lead efforts to introduce new tools and technologies that enhance DevOps practices.
- Problem Resolution and Troubleshooting: Be responsible for diagnosing and resolving complex issues related to infrastructure and deployments. Implement preventive measures to reduce recurring problems.
Requirements:
● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc.IT (if not should be able to demonstrate required skills)
● Overall 3+ years of experience in DevOps and Cloud operations specifically in AWS.
● Experience with Linux administration
● Experience with microservice architecture, containers, Kubernetes, and Helm is a must
● Experience in Configuration Management preferably Ansible
● Experience in Shell Scripting is a must
● Experience in developing and maintaining CI/CD processes using tools like Gitlab, Jenkins
● Experience in logging, monitoring and analytics
● An understanding of writing Infrastructure as Code using tools like Terraform
● Preferences - AWS, Kubernetes, Ansible
Must Have:
● Knowledge of AWS Cloud Platform.
● Good experience with microservice architecture, Kubernetes, Helm, and container-based technologies
● Hands-on experience with Ansible.
● Should have experience in working and maintaining CI/CD Processes.
● Hands-on experience in version control tools like GIT.
● Experience with monitoring tools such as Cloudwatch/Sysdig etc.
● Sound experience in administering Linux servers and Shell Scripting.
● Should have a good understanding of IT security and have the knowledge to secure production environments (OS and server software).
Job Title: DevOps Engineer
Location: Nagercoil
Experience: 7+Years
Company: Finsurge Private Limited.
About FinSurge:
As a global Murex business partner, we offer industry-leading financial solutions, including SaaS product offerings tailored for banking and financial institutions. Our services encompass Murex consultancy and software product development, all built on cutting-edge technologies. This positions us at the forefront of innovation in the financial sector.
Our team of experienced Murex consultants and developers, based in Singapore, India, Malaysia, Hong Kong, Indonesia, the UK, and the US, is committed to assisting clients in capital markets globally.
Job Summary:
We are looking for a skilled DevOps Engineer with expertise in Azure to join our team. The ideal candidate will be responsible for building and maintaining CI/CD pipelines, managing cloud infrastructure, and ensuring the reliability and performance of our applications. This role requires a blend of technical expertise, problem-solving skills, and a collaborative mindset.
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Collaborate with development teams to streamline deployment processes and improve operational efficiency.
- Manage Azure cloud resources, including virtual machines, storage accounts, and databases.
- Monitor and optimize system performance and reliability using Azure monitoring tools.
- Automate deployment processes using Infrastructure as Code (IaC) tools like Terraform or Azure Resource Manager (ARM) templates.
- Implement security best practices across the development and deployment lifecycle.
- Troubleshoot and resolve issues in development, test, and production environments.
- Collaborate with cross-functional teams to ensure seamless integration of software solutions.
- Stay updated on emerging technologies and trends in DevOps and cloud computing.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- years of experience in a DevOps or similar role.
- Proficiency with Azure services, including Azure DevOps, Azure Functions, Azure Kubernetes Service (AKS), and Azure Networking.
- Strong experience with CI/CD tools and practices.
- Familiarity with scripting languages such as PowerShell, Python, or Bash.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes).
- Understanding of monitoring tools (e.g., Azure Monitor, Application Insights).
- Excellent problem-solving skills and attention to detail.
Preferred Qualifications:
- Azure certifications (e.g., Azure DevOps Engineer Expert, Azure Solutions Architect).
- Experience with Agile methodologies and collaboration tools (e.g., Jira, Confluence).
Benefits
- On-site opportunities in Singapore, Malaysia, Hong Kong, Indonesia, and Australia
- A supportive and inclusive environment that values teamwork and collaboration.
- Collaborate with skilled professionals who are passionate about technology and innovation.
Who We Are:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel- https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We work at scale; our infrastructure handles around 30 Billion+ API hits & 2 Billion+ message events monthly and over 25M views of customer pages daily. We also handle over 200 Terabytes of data across 5 Databases.
About the role:
We are seeking an experienced and dynamic Associate Director of Platform to lead our platform engineering team. This role will be responsible for overseeing the development, scaling, and maintenance of our platform infrastructure, with a specific focus on database management, data engineering, security, and observability. The ideal candidate will have a strong technical background, exceptional leadership skills, and a proven track record of managing high-performing engineering teams.
Role and Responsibilities :
Team Leadership & Management:
- Lead, mentor, and manage a team of platform engineers, fostering a culture of collaboration, innovation, and continuous improvement.
- Conduct regular one-on-ones, performance reviews, and career development sessions with team members.
- Provide technical guidance and support to the team, ensuring best practices in software development, system architecture, and operations.
- Oversee the design, development, and deployment of scalable and reliable platform solutions.
Data:
- Manage databases including MongoDB, ElasticSearch, Redis, and Firestore.
- Oversee backups, restores, disaster recovery, and security (VPC and VPN).
- Develop and maintain ETL pipelines, storage dumps, and data warehouses.
- Handle storage solutions such as GCloud Storage, S3, CDN (caching and invalidation), and manage data & storage archivals.
Compliance:
- Ensure compliance with SOC 2, GDPR, PCI DSS, HIPAA, and oversee MSSP, pen testing, and endpoint protection.
Platform Services:
- Design APIs and cloud functions, manage rate limiting, job scheduling, and load testing.
- Maintain dev environments, logging, base worker, events, sockets, and distributed tracing.
Security:
- Implement application security measures including IAM (AuthN & AuthZ), RBAC testing, CAPTCHA, and key chain management.
- Oversee DDoS protection, MSSP, SonarQube scans, runtime security, VPN, and dev device policy.
Observability:
- Set up alerting for all cloud resources, manage incident and on-call management, and ensure uptime monitoring.
- Utilize Prometheus, Grafana, and Mimir for observability.
CI/CD:
- Manage Jenkins, Helm Charts, and frontend architecture for continuous integration and deployment.
Backend Services:
- Oversee backend services including Auto SSL, domain reputation management, Kubernetes, and AppEngine.
Network Management:
- Manage load balancers, subnets, VPC, Istio Service Mesh, certificates, and cloud projects.
Project Management:
- Plan, prioritize, and manage multiple projects and initiatives, ensuring timely and efficient delivery.
- Collaborate with cross-functional teams to gather requirements, define project scopes, and set realistic timelines.
Quality & Performance:
- Establish and maintain high standards of code quality, performance, and reliability.
- Implement and monitor key performance indicators (KPIs) to track the success and health of the platform.
Budget & Resource Management:
- Manage the platform engineering budget, ensuring efficient allocation of resources.
- Identify and recruit top talent to expand the platform engineering team as needed.
Qualifications:
- Bachelor's degree or equivalent experience in Engineering or related field of study
- 9+ years of experience in software engineering, with at least 3 years in a managerial role
- Proven experience in building and scaling platform infrastructure in a fast-paced environment
- Strong background in system architecture, cloud services (e.g., AWS, Azure, Google Cloud), and DevOps practices.
- Excellent leadership and team management skills, with a focus on mentorship and professional development.
- Strong problem-solving and analytical skills.
- Outstanding communication and interpersonal abilities.
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organization stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Location: Mohali (Work from office)
You can directly walk in for an interview.
Requirements :
- Good knowledge of Linux (Ubuntu).
- Knowledge of general networking practices/protocols/administrative tasks.
- Adding, removing, or updating user account information, resetting passwords, etc.
- Scripting to ensure operations automation and data gathering are accomplished seamlessly.
- Ability to work cooperatively with software developers, testers, and database administrators.
- Experience with software version control systems (Git) and CI.
- Knowledge of web servers such as Apache, Nginx, etc.
- E-mail servers based on Postfix and Dovecot.
- Understanding of Docker and Kubernetes containers.
- IT hardware, Linux Server, System Admin, Server administrator
Highlights:
- Working 5 days a week.
- Group Health Insurance for our employees.
- Work with a team of 300+ excellent engineers.
- Extra Compensation for Night Shifts.
- Additional Salary for an extra day spent in the office.
- Lunch buffets for all our employees.
- Fantastic Friday meals to dine in for employees.
- Yearly and quarterly awards with CASH amount, Birthday celebration, Dinner coupons etc.
- Team Dinners on Project Completion.
- Festival celebration, Month End celebration.
Open for both Day and Night shifts
Roles & Responsibilities:
· Hands-on experience with banking applications built on Java, Mule, AWS, API/Microservices, Grafana, Git, Oracle/MySQL.
· System understanding and hands-on experience in application support.
· Perform all tests on production applications.
· Knowledge of databases, Kong, Linux, and AWS environments.
· Log tracing applications like Grafana, Jaeger, Kibana, Dynatrace, etc.
· Knowledge of scripting, process automation, etc.
· Deployment pipelines like Jenkins, GitLab.
· Good to have: Kubernetes, Git.
· Design production support procedures, policies, and documentation
· Identify and resolve technical issues
· Establish the root causes of production errors, and escalate serious concerns
· Prepare recovery procedures for all applications and provide upgrades to the same
· Coordinate with IT Business Groups, other Application Tech Teams, and external vendors to ensure effective application services and the reliability of all applications
· Gather information independently, carry out necessary research, and provide an in-depth analysis to resolve production issues
· Supervise all alerts related to application and system procedures and provide services proactively
· Develop test scripts for new/changed production application capabilities
· Capture and share best-practice knowledge amongst the team
Educational Qualifications:
· Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA)
· Post-Graduation: Master of Science (M.Sc) /Master of Technology (M.Tech) / Master of Computer Applications (MCA)
Hi,
We are looking for candidate with experience in DevSecOps.
Please find the JD below for your reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups, VPC configurations, and utilize Aviatrix for efficient network orchestration.
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration.
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement (see the short sketch after this list).
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability scanning tools within CI/CD pipelines.
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
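As a hedged illustration of the OPA policy-enforcement items above, the sketch below queries a local Open Policy Agent's data API from a CI step to decide whether a container image is allowed; the OPA address, policy package, and input fields are assumptions.

```python
# Illustrative sketch: ask a local OPA instance for a policy decision.
# The URL path (package "ci", rule "allow") and input shape are assumed.
import requests

OPA_URL = "http://localhost:8181/v1/data/ci/allow"


def image_allowed(image):
    payload = {"input": {"image": image, "registry": image.split("/")[0]}}
    resp = requests.post(OPA_URL, json=payload, timeout=5)
    resp.raise_for_status()
    return bool(resp.json().get("result", False))


if __name__ == "__main__":
    print(image_allowed("registry.example.com/payments/api:1.2.3"))
```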
Please send across your updated profile.
The Azure DevOps engineer should have a deep understanding of container principles and hands-on experience with Docker.
They should also be able to set up and manage clusters using Azure Kubernetes Service (AKS), and understand API management, Azure Key Vault, ACR, and networking concepts such as virtual networks, subnets, NSGs, and route tables. Awareness of at least one API gateway product such as Apigee, Kong, or Azure APIM is a must. Strong experience with IaC technologies like Terraform, ARM/Bicep templates, GitHub pipelines, Sonar, etc. is required.
- Designing DevOps strategies: Recommending strategies for migrating and consolidating DevOps tools, designing an Agile work management approach, and creating a secure development process
- Implementing DevOps development processes: Designing version control strategies, integrating source control, and managing build infrastructure
- Managing application configuration and secrets: Ensuring system and infrastructure availability, stability, scalability, and performance
- Automating processes: Overseeing code releases and deployments with an emphasis on continuous integration and delivery
- Collaborating with teams: Working with architects and developers to ensure smooth code integration, and collaborating with development and operations teams to define pipelines.
- Documentation: Producing detailed Development Architecture design, setting up the DevOps tools and working together with the CI/CD specialist in integrating the automated CI and CD pipelines with those tools
- Ensuring security and compliance/DevSecOps: Managing code quality and security policies
- Troubleshooting issues: Investigating issues and responding to customer queries
- Core Skills: The Azure DevOps engineer should have a deep understanding of container principles and hands-on experience with Docker. They should also be able to set up and manage clusters using Azure Kubernetes Service (AKS) (a minimal setup sketch follows this list). Additionally, they should understand API management, Azure Key Vault, ACR, and networking concepts such as virtual networks, subnets, NSGs, and route tables. Awareness of at least one API gateway product such as Apigee, Kong, or Azure APIM is a must, along with strong experience with IaC technologies like Terraform, ARM/Bicep templates, GitHub pipelines, Sonar, etc.
- Additional Skills: Self-starter with the ability to execute tasks on time, excellent communication skills, the ability to come up with multiple solutions to problems, interaction with client-side experts to resolve issues by providing correct pointers, excellent debugging skills, and the ability to break down tasks into smaller steps.
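For illustration, a minimal AKS access sketch, assuming the Azure CLI is installed and logged in and the `kubernetes` Python package is available; the resource group and cluster names below are placeholders.

```python
"""Rough sketch of pulling AKS credentials and listing cluster nodes.

Assumes the Azure CLI is logged in; resource group and cluster names
are hypothetical placeholders.
"""
import subprocess
from kubernetes import client, config

RESOURCE_GROUP = "my-rg"         # hypothetical
CLUSTER_NAME = "my-aks-cluster"  # hypothetical

def fetch_kubeconfig():
    # Merge the AKS cluster credentials into the local kubeconfig.
    subprocess.run(
        ["az", "aks", "get-credentials",
         "--resource-group", RESOURCE_GROUP,
         "--name", CLUSTER_NAME,
         "--overwrite-existing"],
        check=True,
    )

def list_nodes():
    # Load the kubeconfig written above and print node names and versions.
    config.load_kube_config()
    for node in client.CoreV1Api().list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)

if __name__ == "__main__":
    fetch_kubeconfig()
    list_nodes()
```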
Location : Work From Home PAN India (Hyderabad)
Timings : 4 PM IST – 1 AM IST (MUST)
Major Requirements
- 8+ years’ experience as a Site Reliability Engineer with a mix of data center and cloud operations experience.
- AWS experience (3+ years within the last 5 years)
- Service Mesh experience, either supporting or creating the infrastructure in AWS (Kong Service Mesh preferred)
- Kubernetes experience (2+ years in the last 4 years), AWS EKS preferred
- Experience with Helm charts for EKS configuration (see the sketch after this list)
- Experience with IaC and system configuration orchestration (Terraform and Ansible preferred)
- Use of Python with Ansible
- Experience with networking and networking configurations
- Experience with Linux including Ubuntu and Amazon Linux versions
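For illustration, a minimal sketch of the EKS-plus-Helm flow referenced above, assuming boto3, the AWS CLI, and Helm are installed and configured; the cluster name, chart path, and release name are placeholders.

```python
"""Sketch of checking an EKS cluster and rolling out a Helm release.

Cluster name, chart path, release name, and region are hypothetical.
"""
import subprocess
import boto3

CLUSTER = "prod-eks"          # hypothetical
CHART_PATH = "./charts/web"   # hypothetical
RELEASE = "web"

def check_cluster(name=CLUSTER, region="us-east-1"):
    # Confirm the EKS cluster exists and is ACTIVE before deploying.
    eks = boto3.client("eks", region_name=region)
    status = eks.describe_cluster(name=name)["cluster"]["status"]
    print(f"{name}: {status}")
    return status == "ACTIVE"

def deploy_chart():
    # Point kubectl/helm at the cluster, then install or upgrade the release.
    subprocess.run(["aws", "eks", "update-kubeconfig", "--name", CLUSTER], check=True)
    subprocess.run(
        ["helm", "upgrade", "--install", RELEASE, CHART_PATH,
         "--namespace", "web", "--create-namespace", "--wait"],
        check=True,
    )

if __name__ == "__main__":
    if check_cluster():
        deploy_chart()
```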
Bonus Knowledge
- Kafka
- VMware
- .Net Framework Applications
- Microsoft Windows
We are seeking a Senior Backend Engineer to join our team.
Responsibilities:
- Be a key contributor to the design, implementation, testing, and documentation of our public APIs.
- Lead the launch and scaling of our product to support tens of partners and tens of millions of concurrent users.
- Assess and enhance the scalability of the database layer.
- Lead the design, strategic migration, and optimization of customer data.
- Ensure that backend systems and services operate smoothly in production by triaging and resolving operational issues.
- Be a champion for security best practices within the backend, to protect sensitive user data against emerging threats.
- Conduct code reviews and mentorship to elevate team capabilities and product quality.
- Help build a positive and inclusive work culture.
Requirements:
- BS in Computer Science or equivalent.
- 5 years of engineering experience.
- Experience with Golang, Redis, DynamoDB, PostgreSQL, S3, and Kubernetes.
- Experience shipping mature backend systems at high scale.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/OYyXu?source=CareerSite
Explore our Career Page for more such jobs : careers.infraveo.com
About Us:
Leoforce is at the forefront of revolutionizing HR tech with cutting-edge AI solutions. Our products streamline and enhance Recruiting/hiring processes, making recruiters more efficient, accurate, and insightful. We are looking for a highly skilled and experienced Technical Lead to join our dynamic team and drive the development of our
innovative backend systems.
Position Overview:
As a Technical Lead, you will be responsible for leading a team of talented developers, ensuring the successful delivery of high-quality backend services and solutions. Your expertise in .NET and backend technologies will be crucial in architecting, developing, and maintaining scalable and robust systems that power our AI-driven HR tech products.
Key Responsibilities:
Lead and mentor a team of backend developers, providing technical guidance and fostering a culture of collaboration and excellence.
Design, develop, and maintain backend services using .NET technologies, ensuring high performance, scalability, and reliability.
Exposure to frontend tech such as ReactJS, NextJS, Flutter, or AngularJS is preferred.
Collaborate with product managers, data scientists, and front-end developers to deliver seamless and efficient solutions.
Oversee the architecture and design of backend systems, ensuring alignment with best practices and company standards.
Implement and maintain CI/CD pipelines, ensuring smooth deployment and integration processes.
Conduct code reviews, enforce coding standards, and ensure high-quality code delivery.
Troubleshoot and resolve complex technical issues, providing timely and effective solutions.
Stay updated with the latest industry trends and technologies, incorporating relevant advancements into the development process.
Qualifications:
· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
· 7+ years of experience in software development, with a strong focus on backend technologies.
· Extensive experience with the .NET framework, including C#, ASP.NET Core, and related technologies.
· Proven track record of leading and managing development teams in a fast-paced environment.
· Strong understanding of database technologies (SQL Server, SQL, MongoDB, Elasticsearch, etc.) and ORM frameworks (Entity Framework, Dapper, etc.).
· Experience with microservices architecture, RESTful APIs, and distributed systems.
· Proficiency in cloud platforms (AWS, Azure, or Google Cloud) and containerization technologies (Docker, Kubernetes).
· Excellent problem-solving skills, with the ability to think critically and innovatively.
· Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
Preferred Qualifications:
· Experience with AI/ML integration and data processing pipelines.
· Familiarity with front-end technologies (React, Angular, etc.) and their integration with backend services.
· Knowledge of DevOps practices and tools (Jenkins, GitLab CI, etc.).
· Certification in .NET or cloud technologies (e.g., Microsoft Certified: Azure Developer Associate).
What We Offer:
· Competitive salary and benefits package.
· Opportunity to work with cutting-edge AI technologies in the HR tech space.
· A collaborative and inclusive work environment.
· Professional development and growth opportunities.
· Flexible working hours and remote work options.
We are seeking a talented Senior DevOps Engineer to join our team. The ideal candidate will play a crucial role in enhancing our infrastructure and ensuring seamless deployment processes.
Responsibilities:
- Design, implement, and maintain automation scripts for build, deployment, and configuration management.
- Collaborate with software developers to optimize application performance.
- Manage cloud infrastructure and service.
- Implement best practices for security and compliance.
- Proficiency in NoSQL databases.
- Knowledge of server management and shell scripting.
- Experience in cloud computing environments.
- Familiarity with .NET framework is a plus.
- Migrate existing cloud infrastructure to Infrastructure as Code (IaC) via tools like Terraform.
- Build out and enhance scalable containerized infrastructure using Kubernetes (k8s) and EKS.
- Assist software engineers with migration of applications to leverage configuration management and configuration as code (CaC) using tools like Docker.
- Configure and optimize CI/CD pipelines to minimize lead time for changes, including pipelines for infrastructure changes.
- Ensure applications and infrastructure are properly instrumented via observability tooling to improve alerting, monitoring, and incident response (a minimal instrumentation sketch follows this list).
- Recommend infrastructure improvements to ensure architectures are centred around customer needs, and improve overall cloud architecture goals around availability, scalability, reliability, security and costs, and metrics like MTTR, MTBF, etc.
- Automate existing manual workflows and implement controls around them in line with the company's security and compliance goals.
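As a rough, generic illustration of the instrumentation point above (not tied to any specific stack in this JD), the sketch below uses the prometheus_client library; the metric names and the simulated request handler are made up.

```python
"""Minimal sketch of application instrumentation for observability.

Metric names and the simulated handler are illustrative placeholders.
"""
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request():
    # Simulate work, then record success or failure as a labelled counter.
    time.sleep(random.uniform(0.01, 0.2))
    status = "200" if random.random() > 0.05 else "500"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    # Expose /metrics on port 8000 for Prometheus to scrape.
    start_http_server(8000)
    while True:
        handle_request()
```

Alerting rules and dashboards would then be built on top of the scraped metrics in Prometheus and Grafana.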
Requirements:
- 10+ years of experience in DevOps or SRE fields, with current experience supporting engineering teams implementing scalable cloud architectures using CaC, IaC, and Kubernetes.
- Strong proficiency with AWS (certification required), including expert-level knowledge of cloud networking and security best practices.
- Strong programming and/or scripting experience in languages like Python and bash, including extensive experience with source code management and deployment pipelines.
- Extensive experience leveraging observability tooling for logging, monitoring, alerting, incident management, and escalation.
- Exceptional debugging and troubleshooting ability.
- Familiarity with web application architecture concepts, such as databases, message queues, serverless
- Ability to work with Engineering teams to identify and resolve performance constraints.
- Experience managing cloud infrastructure for SaaS applications.
- Experience leading a major cloud migration (on-premise to cloud, poly-cloud, etc.) preferred.
- Experience enabling a continuous deployment capability in a previous role preferred.
- Demonstrated experience guiding and leading other DevOps, cloud, and software engineers, leveling up technical proficiency and overall cloud capabilities.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/U2vjo?source=CareerSite
Explore our Career Page for more such jobs : careers.infraveo.com
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K+ businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
Work with scale, our infrastructure handles around 30 Billion+ API hits, 20 Billion+ message events, and more than 200 TeraBytes of data
About the role:
We are seeking an experienced Engineering Manager to lead our Generative AI teams. You will inspire and manage a team of engineers, driving performance and innovation. Your role will include developing robust AI-driven products and overseeing strategic project management.
Team-Specific Focus Areas:
Conversations AI
- Develop AI solutions for appointment booking, form filling, sales, and intent recognition
- Ensure seamless integration and interaction with users through natural language processing and understanding
Workflows AI
- Create and optimize AI-powered workflows to automate and streamline business processes
Voice AI
- Focus on VOIP technology with an emphasis on low latency and high-quality voice interactions
- Fine-tune voice models for clarity, accuracy, and naturalness in various applications
Support AI
- Integrate AI solutions with FreshDesk and ClickUp to enhance customer support and ticketing systems
- Develop tools for automated response generation, issue resolution, and workflow management
Platform AI
- Oversee AI training, billing, content generation, funnels, image processing, and model evaluations
- Ensure scalable and efficient AI models that meet diverse platform needs and user demands
Requirements:
- Expertise with large scale Conversation Agents along with Response Evaluations
- Extensive hands-on experience with Node.Js and Vue.js (or React/Angular)
- Experience with scaling the services to at least 200k+ MAUs
- Bachelor's degree or equivalent experience in Engineering or related field of study
- 5+ years of engineering experience with 1+ years of management experience
- Strong people, communication, and problem-solving skills
Responsibilities:
- Mentor and coach individuals on the team
- Perform evaluations and give feedback to help them progress
- Plan for fast and flexible delivery by breaking work down into milestones
- Increase efficiency through patterns, frameworks, and processes
- Improve product and engineering quality
- Help drive product strategy
- Design and plan open-ended architectures that are flexible for evolving business needs
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
Company: CorpCare
Title: Head of Engineering/ Head of Product
Location: Mumbai (work from office)
CTC: Annual CTC Up to 25 Lacs
About Us:
CorpCare is India’s first all-in-one corporate funds and assets management platform. We offer a single-window solution for corporates, family offices, and HNIs. We assist corporates in formulating and managing treasury management policies and conducting reviews with investment committees and the board.
Job Summary:
The Head of Engineering will be responsible for overseeing the development, implementation, and management of our corporate funds and assets management platform. This role demands a deep understanding of the broking industry/Financial services industry, software engineering, and product management. The ideal candidate will have a robust background in engineering leadership, a proven track record of delivering scalable technology solutions, and strong product knowledge.
Key Responsibilities:
- Develop and communicate a clear engineering vision and strategy aligned with our broking and funds management platform.
- Conduct market research and technical analysis to identify trends, opportunities, and customer needs within the broking industry.
- Define and prioritize the engineering roadmap, ensuring alignment with business goals and customer requirements.
- Lead cross-functional engineering teams (software development, QA, DevOps, etc.) to deliver high-quality products on time and within budget.
- Oversee the entire software development lifecycle, from planning and architecture to development and deployment, ensuring robust and scalable solutions.
- Write detailed technical specifications and guide the engineering teams to ensure clarity and successful execution.
- Leverage your understanding of the broking industry to guide product development and engineering efforts.
- Collaborate with product managers to incorporate industry-specific requirements and ensure the platform meets the needs of brokers, traders, and financial institutions.
- Stay updated with regulatory changes, market trends, and technological advancements within the broking sector.
- Mentor and lead a high-performing engineering team, fostering a culture of innovation, collaboration, and continuous improvement.
- Recruit, train, and retain top engineering talent to build a world-class development team.
- Conduct regular performance reviews and provide constructive feedback to team members.
- Define and track key performance indicators (KPIs) for engineering projects to ensure successful delivery and performance.
- Analyze system performance, user data, and platform metrics to identify areas for improvement and optimization.
- Prepare and present engineering performance reports to senior management and stakeholders.
- Work closely with product managers, sales, marketing, and customer support teams to align engineering efforts with overall business objectives.
- Provide technical guidance and support to sales teams to help them understand the platform's capabilities and competitive advantages.
- Engage with customers, partners, and stakeholders to gather feedback, understand their needs, and validate engineering solutions.
Requirements:
- BE /B. Tech - Computer Science
- MBA a plus, not required
- 5+ years of experience in software engineering, with at least 2+ years in a leadership role.
- Strong understanding of the broking industry and financial services industry.
- Proven track record of successfully managing and delivering complex software products.
- Excellent communication, presentation, and interpersonal skills.
- Strong analytical and problem-solving abilities.
- Experience with Agile/Scrum methodologies.
- Deep understanding of software architecture, cloud computing, and modern development practices.
Technical Expertise:
- Front-End: React, Next.js, JavaScript, HTML5, CSS3
- Back-End: Node.js, Express.js, RESTful APIs
- Database: MySQL, PostgreSQL, MongoDB
- DevOps: Docker, Kubernetes, AWS (EC2, S3, RDS), CI/CD pipelines
- Version Control: Git, GitHub/GitLab
- Other: TypeScript, Webpack, Babel, ESLint, Redux
Preferred Qualifications:
- Experience in the broking or financial services industry.
- Familiarity with data analytics tools and methodologies.
- Knowledge of user experience (UX) design principles.
- Experience with trading platforms or financial technology products.
This role is ideal for someone who combines strong technical expertise with a deep understanding of the broking industry and a passion for delivering high-impact software solutions.
At the forefront of innovation in the digital video industry
Responsibilities:
- Work with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Creating a well-informed cloud strategy and managing the adaptation process
- Evaluating cloud applications, hardware, and software
- Develop and manage well-functioning databases and applications Write effective APIs
- Participate in the entire application lifecycle, focusing on coding and debugging
- Write clean code to develop, maintain and manage functional web applications
- Get feedback from, and build solutions for, users and customers
- Participate in requirements, design, and code reviews
- Engage with customers to understand and solve their issues
- Collaborate with remote team on implementing new requirements and solving customer problems
- Focus on quality of deliverables with high accountability and commitment to program objectives
Required Skills:
- 7–10 years of software development experience
- Experience using Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or other major cloud computing services.
- Strong skills in Containers, Kubernetes, Helm
- Proficiency in C#, .NET, PHP/Java technologies with an acumen for code analysis, debugging, and problem solving
- Strong skills in database design (PostgreSQL or MySQL)
- Experience with caching and message queues
- Experience in REST API framework design
- Strong focus on high-quality and maintainable code
- Understanding of multithreading, memory management, object-oriented programming
Preferred skills:
- Experience in working with Linux OS
- Experience in Core Java programming
- Experience in working with JSP/Servlets, Struts, Spring / Spring Boot, Hibernate
- Experience in working with web technologies: HTML, CSS
- Knowledge of source versioning and development tools, particularly JIRA, Git, Stash, and Jenkins.
- Domain Knowledge of Video, Audio Codecs
at Connect IO
1. RedHat OpenShift (L2/L3 Expertise):
- Setup of the OpenShift Ingress Controller (and deployment of multiple Ingresses)
- Setup of the OpenShift Image Registry
- Very good knowledge of the OpenShift Management Console to help application teams manage their pods and troubleshoot issues
- Expertise in deploying artifacts to an OpenShift cluster and configuring customized scaling capabilities
- Knowledge of pod logging in an OpenShift cluster for troubleshooting (see the sketch after this list)
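For illustration, a minimal sketch of pulling pod logs for troubleshooting; OpenShift exposes the standard Kubernetes API, so the `kubernetes` Python client is assumed here, and the namespace is a placeholder.

```python
"""Sketch of tailing logs of unhealthy pods on an OpenShift/Kubernetes cluster.

Assumes the local kubeconfig is already authenticated (e.g. via `oc login`);
the namespace is a hypothetical placeholder.
"""
from kubernetes import client, config

NAMESPACE = "my-app"  # hypothetical project/namespace

def tail_failing_pods(namespace=NAMESPACE, lines=50):
    # Load credentials from the local kubeconfig.
    config.load_kube_config()
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(f"--- {pod.metadata.name} ({pod.status.phase}) ---")
            # Fetch the last N log lines of the pod for a quick diagnosis.
            print(core.read_namespaced_pod_log(
                name=pod.metadata.name, namespace=namespace, tail_lines=lines))

if __name__ == "__main__":
    tail_failing_pods()
```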
2. Architect:
- Suggestions on architecture setup
- Validate the architecture and advise on pros, cons, and feasibility.
- Management of a multi-location sharded architecture
- Multi-region sharding setup
3. Application DBA:
- Validate and help with Sharding decisions at collection level
- Providing deep analysis on performance by looking at execution plans
- Index Suggestions
- Archival Suggestions and Options
4. Collaboration
Ability to plan and delegate work by providing specific instructions.
at Scoutflo
Scoutflo is a platform that automates complex infrastructure requirements for Kubernetes Infrastructure.
Job Description:
- In-depth knowledge of full-stack development principles and best practices.
- Expertise in building web applications with strong proficiency in technologies like Node.js, React, and Go.
- Experience developing and consuming RESTful & gRPC API protocols.
- Familiarity with CI/CD workflows and DevOps processes.
- Solid understanding of cloud platforms and container orchestration technologies.
- Experience with Kubernetes pipelines and workflows using tools like Argo CD.
- Experience with designing and building user-friendly interfaces.
- Excellent understanding of distributed systems, databases, and APIs.
- A passion for writing clean, maintainable, and well-documented code.
- Strong problem-solving skills and the ability to work independently as well as collaboratively.
- Excellent communication and interpersonal skills.
- Experience with building self-serve platforms or user onboarding experiences.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform.
- A strong understanding of security best practices for Kubernetes deployments.
- Grasp of setting up network architecture for distributed systems.
Must have:
1) Experience with managing Infrastructure on AWS/GCP or Azure
2) Managed Infrastructure on Kubernetes
Role & Responsibilities
- DevOps Engineer will be working with implementation and management of DevOps tools and technologies.
- Create and support advanced pipelines using Gitlab.
- Create and support advanced container and serverless environments.
- Deploy cloud infrastructure using Terraform and CloudFormation templates.
- Implement deployments to OpenShift Container Platform, Amazon ECS, and EKS (see the sketch after this list)
- Troubleshoot containerized builds and deployments
- Implement processes and automations for migrating between OpenShift, AKS and EKS
- Implement CI/CD automations.
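For illustration only, a minimal sketch of creating a Deployment programmatically with the kubernetes Python client; the image, names, and namespace are placeholders, and the kubeconfig is assumed to already point at the target cluster (EKS, AKS, or OpenShift).

```python
"""Sketch of a programmatic Deployment rollout to a Kubernetes-compatible cluster.

The image, deployment name, and namespace are hypothetical placeholders.
"""
from kubernetes import client, config

def make_deployment(name="demo-api", image="nginx:1.27", replicas=2):
    # Build a minimal Deployment object: metadata, selector, and one container.
    container = client.V1Container(
        name=name, image=image,
        ports=[client.V1ContainerPort(container_port=80)])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template)
    return client.V1Deployment(api_version="apps/v1", kind="Deployment",
                               metadata=client.V1ObjectMeta(name=name), spec=spec)

if __name__ == "__main__":
    config.load_kube_config()
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=make_deployment())
    print("deployment created")
```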
Required Skillsets
- 3-5 years of cloud-based architecture software engineering experience.
- Deep understanding of Kubernetes and its architecture.
- Mastery of cloud security engineering tools, techniques, and procedures.
- Experience with AWS services such as Amazon S3, EKS, ECS, DynamoDB, AWS Lambda, API Gateway, etc.
- Experience with designing and supporting infrastructure via Infrastructure-as-Code in AWS, via CDK, CloudFormation Templates, Terraform or other toolset.
- Experienced with tools like Jenkins, GitHub, Puppet, or other similar toolsets.
- Experienced with monitoring tools like CloudWatch, New Relic, Grafana, Splunk, etc.
- Excellence in verbal and written communication, and in working collaboratively with a variety of colleagues and clients in a remote development environment.
- Proven track record in cloud computing systems and enterprise architecture and security
• Technical Leadership: Lead, mentor, and inspire a team of engineers to deliver high-quality software solutions. Provide technical guidance and support to team members, ensuring adherence to coding standards and best practices.
• Full Stack Development: Hands-on coding in ReactJS, Golang, or Java to contribute directly to project deliverables. Lead by example, demonstrating best practices in coding, design, and testing.
• Microservices and Cloud-Native Architecture: Design, implement, and maintain microservices architecture for scalable and resilient applications. Leverage cloud-native technologies and principles to build robust and efficient systems.
• Squad Management: Independently manage engineering squads, ensuring effective collaboration and delivery of project goals. Foster a positive and collaborative team culture, encouraging innovation and continuous improvement.
• Cross-Functional Collaboration: Collaborate with product managers, UX/UI designers, and other stakeholders to define project requirements and priorities. Ensure alignment between technical solutions and business objectives.
• Technology Stack Expertise: Stay updated on industry trends and emerging technologies. Evaluate and introduce new technologies/tools to enhance the development process.
at DeepIntent
With a core belief that advertising technology can measurably improve the lives of patients, DeepIntent is leading the healthcare advertising industry into the future. Built purposefully for the healthcare industry, the DeepIntent Healthcare Advertising Platform is proven to drive higher audience quality and script performance with patented technology and the industry’s most comprehensive health data. DeepIntent is trusted by 600+ pharmaceutical brands and all the leading healthcare agencies to reach the most relevant healthcare provider and patient audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.
We are seeking a skilled and experienced Site Reliability Engineer (SRE) to join our dynamic team. The ideal candidate will have a minimum of 3 years of hands-on experience in managing and maintaining production systems, with a focus on reliability, scalability, and performance. As an SRE at Deepintent, you will play a crucial role in ensuring the stability and efficiency of our infrastructure, as well as contributing to the development of automation and monitoring tools.
Responsibilities:
- Deploy, configure, and maintain Kubernetes clusters for our microservices architecture.
- Utilize Git and Helm for version control and deployment management.
- Implement and manage monitoring solutions using Prometheus and Grafana.
- Work on continuous integration and continuous deployment (CI/CD) pipelines.
- Containerize applications using Docker and manage orchestration.
- Manage and optimize AWS services, including but not limited to EC2, S3, RDS, and AWS CDN.
- Maintain and optimize MySQL databases, Airflow, and Redis instances.
- Write automation scripts in Bash or Python for system administration tasks (see the sketch after this list).
- Perform Linux administration tasks and troubleshoot system issues.
- Utilize Ansible and Terraform for configuration management and infrastructure as code.
- Demonstrate knowledge of networking and load-balancing principles.
- Collaborate with development teams to ensure applications meet reliability and performance standards.
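As a small, generic example of the kind of automation script mentioned above (purely illustrative; the region and cleanup policy are assumptions), the sketch below reports unattached EBS volumes with boto3 so they can be reviewed and cleaned up.

```python
"""Illustrative automation script: report unattached EBS volumes.

Assumes AWS credentials are configured for boto3; the region is a placeholder.
"""
import boto3

def unattached_volumes(region="us-east-1"):
    # 'available' volumes are not attached to any instance.
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate(
            Filters=[{"Name": "status", "Values": ["available"]}]):
        for vol in page["Volumes"]:
            yield vol["VolumeId"], vol["Size"]

if __name__ == "__main__":
    total_gib = 0
    for vol_id, size in unattached_volumes():
        print(f"{vol_id}\t{size} GiB")
        total_gib += size
    print(f"Total unattached: {total_gib} GiB")
```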
Additional Skills (Good to Know):
- Familiarity with ClickHouse and Druid for data storage and analytics.
- Experience with Jenkins for continuous integration.
- Basic understanding of Google Cloud Platform (GCP) and data center operations.
Qualifications:
- Minimum 3 years of experience in a Site Reliability Engineer role or similar.
- Proven experience with Kubernetes, Git, Helm, Prometheus, Grafana, CI/CD, Docker, and microservices architecture.
- Strong knowledge of AWS services, MySQL, Airflow, Redis, AWS CDN.
- Proficient in scripting languages such as Bash or Python.
- Hands-on experience with Linux administration.
- Familiarity with Ansible and Terraform for infrastructure management.
- Understanding of networking principles and load balancing.
Education:
Bachelor's degree in Computer Science, Information Technology, or a related field.
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
at Freestone Infotech Pvt. Ltd.
Core Experience:
•Experience in Core Java, J2EE, Spring/Spring Boot, Hibernate, Spring REST, Linux, JUnit, Maven, Design Patterns.
• Sound knowledge of RDBMS like MySQL/Postgres, including schema design.
• Exposure to Linux environment.
• Exposure to Docker and Kubernetes.
• Basic Knowledge of Cloud Services of AWS, Azure, GCP cloud provider.
• Proficient in general programming, logic, problem solving, data structures & algorithms
• Good analytical, grasping and problem-solving skills.
Secondary Skills:
• Agile / Scrum Development Experience preferred.
• Comfortable working with a microservices architecture and familiarity with NoSQL solutions.
• Experience in Test Driven Development.
• Excellent written and verbal communication skills.
• Hands-on skills in configuration of popular build tools, like Maven and Gradle
• Good knowledge of testing frameworks such as JUnit.
• Good knowledge of coding standards, source code organization and packaging/deploying.
• Good knowledge of current and emerging technologies and trends.
Job Responsibilities:
• Design, Development and Delivery of Java based enterprise-grade applications.
• Ensure best practices, quality and consistency within various design and development phases.
• Develop, test, implement and maintain application software working with established processes.
• Work with QA and help them with test automation.
• Work with Technical Writers and help them document the features you are developing.
Education and Experience:
• Bachelor’s / master’s degree in computer science or information technology or related field
About Us:
At Product Fusion, we are dedicated to building innovative and scalable software solutions. Our team is passionate about leveraging cutting-edge technologies to drive product excellence and create impactful digital experiences. We invite you to join our dynamic team and contribute to our mission of technological innovation.
Job Description:
We are seeking a talented and motivated Full-Stack Developer to join our team. The ideal candidate will have a strong background in both front-end and back-end development, with proficiency in modern web technologies and frameworks. You will work closely with our development team to design, develop, and deploy scalable web applications.
Requirements:
- Proven experience as a Full-Stack Developer or similar role
- Strong proficiency in React JS or Next JS
- Solid understanding of Python and frameworks like Django, FastAPI, or Flask
- Proficiency in Tailwind CSS for front-end development
- Experience with PostgreSQL or MySQL databases
- Familiarity with Kubernetes for container orchestration
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork abilities
What We Offer:
- Competitive salary and benefits
- Flexible working hours and remote work options
- Opportunity to work with cutting-edge technologies
- Collaborative and innovative work environment
- Professional development and growth opportunities
Company Description
Miko is an advanced artificial intelligence innovation lab with a mission to bring AI and robotics to every consumer home. Headquartered in Mumbai, India, our workforce spans several countries, including the
United States, Canada, Europe, and the Middle East. To check out various product offerings, visit Miko's website.
Position Overview:
We seek a highly skilled and experienced Senior/ Lead Software Engineer to join our innovative team. The ideal candidate will have a strong background in Java development and be proficient in various backend technologies and frameworks. The role involves designing, developing, and maintaining high-performance, scalable backend systems. The candidate should be comfortable working in a Linux environment and have hands-on experience with both SQL and NoSQL
databases, as well as modern containerization and orchestration tools.
Key Responsibilities:
• Design, develop, and maintain backend services using Java, Spring Boot, and Vert.x.
• Implement and manage database solutions using SQL and NoSQL databases.
• Work with Hibernate for ORM (Object-Relational Mapping).
• Develop and manage caching mechanisms with Redis.
• Implement messaging and streaming solutions using Kafka.
• Utilize Docker for containerization and Kubernetes for orchestration.
• Perform system designing to ensure high availability, scalability, and reliability of applications.
• Design and develop microservices or monolithic architectures based on project requirements.
• Collaborate with front-end developers and other team members to establish objectives and design more functional, cohesive code to enhance the user experience.
• Write clean, scalable code using Java programming languages.
• Revise, update, and debug code.
• Improve existing software.
• Develop documentation throughout the software development life cycle (SDLC).
• Serve as an expert on applications and provide technical support.
Mandatory Skills and Qualifications:
• Proven experience as a Java Backend Developer.
• Strong expertise in Java, Spring Boot, and Vert.x.
• Proficient in using Hibernate for ORM.
• Extensive experience with Linux operating systems.
• Hands-on experience with Git version control system.
• Solid understanding of SQL and NoSQL databases.
• Experience with Redis for caching.
• Practical knowledge of Kafka for messaging and streaming.
• Proficiency with Docker for containerization and Kubernetes for orchestration.
• Strong understanding of system designing principles.
• Experience with microservices or monolithic architecture.
• Excellent problem-solving skills and attention to detail.
• Ability to work independently and as part of a team.
• Strong communication skills.
Preferred Skills:
• Familiarity with CI/CD pipelines.
• Knowledge of cloud platforms (AWS, Azure, GCP).
• Understanding of network protocols and security.
Educational Qualifications:
• Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field.
Why Miko?
• Cutting-Edge Technology: Work with the latest in AI, robotics, and software development.
• Dynamic Work Environment: Collaborative and inclusive culture encouraging creativity and innovation.
• Career Growth: Opportunities for continuous learning, mentorship, and professional advancement.
• Impactful Work: Contribute to products that enhance the learning and play experiences of children worldwide.
• Global Reach: Be part of a brand that has a significant international presence.
• Innovative Products: Develop revolutionary products like the Miko robot.
• Supportive Culture: Enjoy a diverse, inclusive, and well-balanced work-life environment.
• Competitive Compensation: Receive competitive salaries and benefits packages.
• Entrepreneurial Spirit: Bring your ideas to life in a company that values initiative.
• Community Engagement: Participate in outreach programs and initiatives that give back to the community.
Devops Engineer(Permanent)
Experience: 8 to 12 yrs
Location: Remote for 2-3 months (Any Mastek Location- Chennai/Mumbai/Pune/Noida/Gurgaon/Ahmedabad/Bangalore)
Max Salary = 28 LPA (including 10% variable)
Notice Period: Immediate/ max 10days
Mandatory Skills: Either Splunk/Datadog, Gitlab, Retail Domain
· Bachelor’s degree in Computer Science/Information Technology, or in a related technical field or equivalent technology experience.
· 10+ years’ experience in software development
· 8+ years of experience in DevOps
· Mandatory Skills: Either Splunk/Datadog, GitLab, EKS, Retail domain experience
· Experience with the following Cloud Native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes
· Working knowledge of containers, i.e., Docker and Kubernetes, ideally with experience transitioning an organization through its adoption
· Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full stack integration
· Strong working knowledge of enterprise platforms, tools and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning
· Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery
· Experience working with a variety of vendor APIs including cloud, physical and logical infrastructure devices
· Strong working knowledge of Cloud offerings & Cloud DevOps Services (EC2, ECS, IAM, Lambda, Cloud services, AWS CodeBuild, CodeDeploy, Code Pipeline etc or Azure DevOps, API management, PaaS)
· Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.
· Manage and maintain standards for DevOps tools used by the team
Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you’re not just landing a new career, you become part of a team of Egnyters that are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best in class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data. We observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work, from start to finish, are integral. Our engineers are part of the process from design to code, to test, to deployment and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and what project you are working on. The ideal candidate should be able to take a complex problem and execute it end to end, and mentor and set higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable and high-performance solutions.
• Taking part in big projects like migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, from monolith to microservices.
• Proactively make our organization and technology better!
• Advising others as to how DevOps can make a positive impact on their work.
• Share knowledge, mentor more junior team members while also still learning and gaining new skills.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge of and experience with deploying and maintaining Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment.
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions.
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience in Cloud migrations projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
at PortOne
PortOne is re−imagining payments in Korea and other international markets. We are a Series B funded startup backed by prominent VC firms Softbank and Hanwa Capital
PortOne provides a unified API for merchants to integrate with and manage all of the payment options available in Korea and SEA Markets - Thailand, Singapore, Indonesia etc. It's currently used by 2000+ companies and processing multi-billion dollars in annualized volume. We are building a team to take this product to international markets, and looking for engineers with a passion for fintech and digital payments.
Culture and Values at PortOne
- You will be joining a team that stands for Making a difference.
- You will be joining a culture that identifies more with Sports Teams rather than a 9 to 5 workplace.
- This will be a remote role that allows you the flexibility to save time on commute
- You will have peers who are/have
- Highly Self Driven with A sense of purpose
- High Energy Levels - Building stuff is your sport
- Ownership - Solve customer problems end to end - Customer is your Boss
- Hunger to learn - Highly motivated to keep developing new tech skill sets
Who you are ?
* You are an athlete and Devops/DevSecOps is your sport.
* Your passion drives you to learn and build stuff and not because your manager tells you to.
* Your work ethic is that of an athlete preparing for your next marathon. Your sport drives you and you like being in the zone.
* You are NOT a clock-watcher renting out your time, and you do NOT have an attitude of "I will do only what is asked for"
* Enjoy solving problems and delighting users, both internally and externally
* Take pride in working on projects to successful completion involving a wide variety of technologies and systems
* Possess strong & effective communication skills and the ability to present complex ideas in a clear & concise way
* Responsible, self-directed, forward thinker, and operates with focus, discipline and minimal supervision
* A team player with a strong work ethic
Experience
* 2+ years of experience working as a DevOps/DevSecOps Engineer
* BE in Computer Science or equivalent combination of technical education and work experience
* Must have actively managed infrastructure components & devops for high quality and high scale products
* Proficient knowledge of and experience with infra concepts - Networking/Load Balancing/High Availability
* Experience on designing and configuring infra in cloud service providers - AWS / GCP / AZURE
* Knowledge on Secure Infrastructure practices and designs
* Experience with DevOps, DevSecOps, Release Engineering, and Automation
* Experience with Agile development incorporating TDD / CI / CD practices
Hands on Skills
* Proficient in at least one high-level programming language: Go / Java / C
* Proficient in scripting - bash scripting etc - to build/glue together devops/datapipeline workflows
* Proficient in Cloud Services - AWS / GCP / AZURE
* Hands on experience on CI/CD & relevant tools - Jenkins / Travis / Gitops / SonarQube / JUnit / Mock frameworks
* Hands-on experience with the Kubernetes ecosystem & container-based deployments - Kubernetes / Docker / Helm Charts / Vault / Packer / Istio / Flyway
* Hands on experience on Infra as code frameworks - Terraform / Crossplane / Ansible
* Version Control & Code Quality: Git / Github / Bitbucket / SonarQube
* Experience on Monitoring Tools: Elasticsearch / Logstash / Kibana / Prometheus / Grafana / Datadog / Nagios
* Experience with RDBMS Databases & Caching services: Postgres / MySql / Redis / CDN
* Experience with Data Pipelines/Workflow tools: Airflow / Kafka / Flink / Pub-Sub
* DevSecOps - Cloud Security Assessment, Best Practices & Automation
* DevSecOps - Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Preferable to have DevOps/Infra experience for products in the Payments/Fintech domain - Payment Gateways/Bank integrations etc.
What will you do ?
Devops
* Provisioning the infrastructure using Crossplane/Terraform/CloudFormation scripts.
* Creating and managing AWS services such as EC2, RDS, S3, VPC, KMS, and IAM, including EKS clusters and RDS databases.
* Monitor the infra to prevent outages/downtimes and honor our infra SLAs
* Deploy and manage new infra components.
* Update and Migrate the clusters and services.
* Reducing cloud cost by scheduling or stopping less-utilized instances (see the sketch after this list).
* Collaborate with stakeholders across the organization such as experts in - product, design, engineering
* Uphold best practices in Devops/DevSecOps and Infra management with attention to security best practices
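A minimal sketch of the instance-scheduling idea above, assuming instances carry a hypothetical schedule=office-hours tag and that boto3 credentials are configured; the region and hour window are illustrative only.

```python
"""Sketch of stopping tagged instances outside working hours to save cost.

The tag key/value, region, and hour window are assumptions for illustration.
"""
import datetime
import boto3

def stop_after_hours(region="ap-northeast-2", start_hour=9, end_hour=19):
    now = datetime.datetime.now().hour
    if start_hour <= now < end_hour:
        return  # inside working hours, leave instances running
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
        print("stopped:", ids)

if __name__ == "__main__":
    stop_after_hours()
```

In practice this kind of script runs on a schedule (e.g. EventBridge + Lambda or a cron job) rather than manually.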
DevSecOps
* Cloud Security Assessment & Automation
* Modify existing infra to adhere to security best practices
* Perform Threat Modelling of Web/Mobile applications
* Integrate security testing tools (SAST, DAST) into CI/CD pipelines (see the sketch after this list)
* Incident management and remediation - Monitoring security incidents, recovery from and remediation of the issues
* Perform frequent Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Ensure the environment is compliant with CIS, NIST, PCI, etc.
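A rough sketch of wiring security scanners into a CI stage; Bandit (Python SAST) and Trivy (container image scanning) are used here only as stand-in examples, and the source path and image name are placeholders.

```python
"""Sketch of a CI security stage: SAST over sources, then an image scan.

Assumes bandit and trivy are installed on the CI runner; src/ and
myapp:latest are hypothetical placeholders.
"""
import subprocess
import sys

CHECKS = [
    # SAST: recursively scan Python sources for common security issues.
    ["bandit", "-r", "src/"],
    # Image scan: fail the stage on HIGH/CRITICAL vulnerabilities.
    ["trivy", "image", "--exit-code", "1",
     "--severity", "HIGH,CRITICAL", "myapp:latest"],
]

def main():
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(1)  # a non-zero exit fails the CI job

if __name__ == "__main__":
    main()
```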
Here are examples of apps/features you will be supporting as a Devops/DevSecOps Engineer
* Intuitive, easy-to-use APIs for payment process.
* Integrations with local payment gateways in international markets.
* Dashboard to manage gateways and transactions.
* Analytics platform to provide insights
· IMMEDIATE JOINER
Professional experience of 5+ years in Confluent Kafka administration
· Demonstrated experience in design/development.
· Must have proven knowledge and practical application of – Confluent Kafka (Producers/ Consumers / Kafka Connectors / Kafka Stream/ksqlDB/Schema Registry)
· Experience in performance optimization of consumers, producers.
· Good experience debugging issues related to offsets, consumer lag, and partitions.
· Experience with Administrative tasks on Confluent Kafka.
· Kafka admin experience including, but not limited to: setting up new Kafka clusters, creating topics, granting permissions, offset resets, purging data, setting up connectors, setting up replicator tasks, troubleshooting issues, monitoring Kafka cluster health and performance, and backup and recovery (a minimal topic-creation sketch follows this list).
· Experience in implementing security measures for Kafka clusters, including access controls and encryption, to protect sensitive data.
· Install/Upgrade Kafka cluster techniques.
· Good experience with writing unit tests using JUnit and Mockito
· Experience working on client-facing projects.
· Exposure to any cloud environment like AZURE is added advantage.
· Experience in developing or working on REST Microservices
Experience in Java and Spring Boot is a plus
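For illustration, a minimal topic-creation sketch using the confluent-kafka AdminClient; the broker address, topic name, and sizing below are placeholders, not values from this JD.

```python
"""Sketch of a routine Kafka admin task: creating a topic programmatically.

Bootstrap servers, topic name, and partition/replication sizing are
hypothetical placeholders.
"""
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker1:9092"})  # placeholder broker

def create_topic(name="orders", partitions=6, replication=3):
    # create_topics() is asynchronous; each topic maps to a future we wait on.
    futures = admin.create_topics(
        [NewTopic(name, num_partitions=partitions,
                  replication_factor=replication)])
    for topic, future in futures.items():
        try:
            future.result()
            print(f"created topic {topic}")
        except Exception as exc:
            print(f"failed to create {topic}: {exc}")

if __name__ == "__main__":
    create_topic()
```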
Job Description: React Native Developer
Experience: Over 4 years
Responsibilities:
- Architect, design, develop, and maintain complex, scalable React Native applications using clean code principles.
- Collaborate with designers to translate UI/UX mock-ups into pixel-perfect, native-feeling mobile interfaces.
- Leverage React Native's capabilities to build reusable UI components and implement performant animations.
- Effectively utilize native modules and APIs to achieve platform-specific functionalities when necessary.
- Write unit and integration tests to ensure code quality and maintainability.
- Identify and troubleshoot bugs, diagnose performance bottlenecks, and implement optimizations.
- Stay up to date with the latest trends and advancements in the React Native ecosystem.
- Participate in code reviews, provide mentorship to junior developers, and foster a collaborative development environment.
Qualifications:
- Experience in professional software development with a strong focus on mobile development.
- Proven experience building production ready React Native applications.
- In-depth knowledge of React, JavaScript (ES6+), and related web technologies (HTML, CSS).
- Strong understanding of mobile development concepts and best practices.
- Experience with Redux or similar state management libraries for complex applications.
- Experience with unit testing frameworks (Jest, Mocha) and UI testing tools.
- Excellent communication, collaboration, and problem-solving skills.
- Ability to work independently and manage multiple tasks effectively.
- A passion for building high-quality, user-centric mobile applications.
Nice To Have:
- Experience with native development (iOS/Android) for deep integrations.
- Experience with containerization technologies (Docker, Kubernetes).
- Experience with continuous integration/continuous delivery (CI/CD) pipelines.
- Experience with GraphQL or RESTful APIs.
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices.
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications.
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar (see the sketch after this list).
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
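As a small illustration of the IaC point above, here is a sketch that drives CloudFormation from Python with boto3; the stack name and the single-bucket template are placeholders, not anything from this JD.

```python
"""Sketch of creating a CloudFormation stack from an inline template.

The stack name, region, and the minimal one-bucket template are
hypothetical placeholders.
"""
import json
import boto3

TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {"Type": "AWS::S3::Bucket"},
    },
}

def deploy_stack(name="demo-iac-stack", region="us-east-1"):
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(StackName=name, TemplateBody=json.dumps(TEMPLATE))
    # Block until the stack reaches CREATE_COMPLETE (or the waiter errors out).
    cfn.get_waiter("stack_create_complete").wait(StackName=name)
    print(f"stack {name} created")

if __name__ == "__main__":
    deploy_stack()
```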
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.
Job Description:
We are seeking a motivated DevOps intern to join our team. The intern will be responsible for deploying and maintaining applications in AWS and Azure cloud environments, as well as on client local machines when required. The intern will troubleshoot any deployment issues and ensure the high availability of the applications.
Responsibilities:
- Deploy and maintain applications in AWS and Azure cloud environments
- Deploy applications on client local machines when needed
- Troubleshoot deployment issues and ensure high availability of applications
- Collaborate with development teams to improve deployment processes
- Monitor system performance and implement optimizations
- Implement and maintain CI/CD pipelines
- Assist in implementing security best practices
Requirements:
- Currently pursuing a degree in Computer Science, Engineering, or related field
- Knowledge of cloud computing platforms (AWS, Azure)
- Familiarity with containerization technologies (Docker, Kubernetes)
- Basic understanding of networking principles
- Strong problem-solving skills
- Excellent communication skills
Nice to Have:
- Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet)
- Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack)
- Understanding of security best practices in cloud environments
Benefits:
- Hands-on experience with cutting-edge technologies.
- Opportunity to work on exciting AI and LLM projects
DevOps Lead Engineer
We are seeking a skilled DevOps Lead Engineer with 8 to 10 yrs. of experience who handles the entire DevOps lifecycle and is accountable for the implementation of the process. A DevOps Lead Engineer is responsible for automating all the manual tasks for developing and deploying code and data to implement continuous deployment and continuous integration frameworks. They are also responsible for maintaining high availability of production and non-production work environments.
Essential Requirements (must have):
• Bachelor's degree, preferably in Engineering.
• Solid 5+ years of experience with AWS, DevOps, and related technologies
Skills Required:
Cloud Performance Engineering
• Performance scaling in a Micro-Services environment
• Horizontal scaling architecture
• Containerization (such as Docker) & Deployment
• Container Orchestration (such as Kubernetes) & Scaling
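As an illustration of Kubernetes-based horizontal scaling, the sketch below uses the official Kubernetes Python client to patch a Deployment's replica count; the deployment name, namespace, and replica count are placeholders, and in practice a HorizontalPodAutoscaler would usually drive this automatically.

```python
# Hypothetical example: scale a Deployment horizontally with the official
# Kubernetes Python client. Deployment name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when run in-cluster
apps = client.AppsV1Api()

# Patch only the replica count; the rest of the Deployment spec is untouched.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("web scaled to 5 replicas")
```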
DevOps Automation
• End-to-end release automation.
• Solid experience in DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, Ansible, CloudFormation (CFN), etc.
• Solid experience in infrastructure automation (Infrastructure as Code), deployment, and implementation.
• Experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.
• Strong scripting knowledge
• Strong analytical and problem-solving skills.
• Cloud and On-prem deployments
Infrastructure Design & Provisioning
• Infra provisioning.
• Infrastructure Sizing
• Infra Cost Optimization
• Infra security
• Infra monitoring & site reliability.
Job Responsibilities:
• Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment and that provide a stable environment for quality delivery.
• The DevOps Lead Engineer is accountable for designing, building, configuring, and optimizing automation systems that help execute business web and data infrastructure platforms.
• The DevOps Lead Engineer is involved in creating technology infrastructure and automation tools, and in maintaining configuration management.
• The Lead DevOps Engineer oversees and leads the activities of the DevOps team, conducting training sessions for junior team members and providing mentoring and career support. They are also responsible for the architecture and technical leadership of the complete DevOps infrastructure.
Job Role - DevOps Infra Lead Engineer
About LenDenClub
LenDenClub is a leading peer-to-peer lending platform that provides an alternative investment opportunity by connecting investors or lenders looking for high returns with creditworthy borrowers looking for short-term personal loans. With a total of 8 million users and 2 million+ investors on board, LenDenClub has become a go-to platform to earn returns in the range of 10%-12%. LenDenClub offers investors a convenient medium to browse thousands of borrower profiles to achieve better returns than traditional asset classes. Moreover, LenDenClub is safeguarded against market volatility and inflation, and provides a great way to diversify one's investment portfolio.
LenDenClub has raised US $10 million in a Series A round from an association of investors. With the new round of funding, LenDenClub was valued at more than US $51 million in the last round and has grown multifold since then.
Why work at LenDenClub
LenDenClub is a certified great place to work. The certification comes from the Great Place to Work Institute, Inc., a globally renowned firm dedicated to evaluating companies for their employee satisfaction on the grounds of high trust and high-performance culture at workplaces.
As a LenDenite, you will be a part of an enthusiastic and passionate group of individuals who own and love what they do. At LenDenClub we believe in creating leaders and with you coming on board you get to work with complete freedom to chase your ultimate career goal without any inhibitions.
Website - https://www.lendenclub.com
Location - Mumbai (Goregaon)
Responsibilities of a DevOps Infra Lead Engineer:
● Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment. Identify and implement data storage methods such as clustering to improve performance.
● Responsible for designing solutions that manage a vast number of documents in real time and enable quick search and analysis. Identify issues in the production system and implement monitoring solutions to overcome those issues.
● Stay abreast of industry trends and best practices. Conduct research, run tests, and execute new techniques that can be reused and applied to the software development project.
● Accountable for designing, building, and optimizing automation systems that help to execute business web and data infrastructure platforms.
● Creating technology infrastructure, automation tools, and maintaining configuration management.
● To cater to the engineering department’s quality and standards, implement lifecycle infrastructure solutions and documentation operations.
● Implementation and maintaining of CI/CD pipelines.
● Containerisation of applications
● Construct and improve security on the infrastructure
● Infrastructure as Code
● Maintaining environments
● NAT and ACLs
● Setup of ECS and ELB for HA
● WAF, firewalls, and DMZ
● Deployment strategies for high uptime
● Set up monitoring and policies for infra and applications (a minimal alarm sketch follows this list)
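A minimal sketch of the monitoring item above, assuming boto3 and an ALB fronting the ECS services; the metric, thresholds, load-balancer dimension, and SNS topic ARN are illustrative placeholders.

```python
# Hypothetical example: a CloudWatch alarm that notifies when an ALB starts
# returning 5xx errors. Dimension value and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="alb-5xx-errors",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_ELB_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=60,                     # evaluate one-minute windows
    EvaluationPeriods=5,           # alarm after 5 consecutive breaches
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:devops-alerts"],
)
```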
Required Skills
● Communication Skills
● Interpersonal Skills
● Infrastructure
● Aware of technologies like Python, MySQL, MongoDB, and so on.
● Sound knowledge of cloud infrastructure.
● Knowledge of fundamental Unix/Linux, monitoring, editing, and command-line tools is essential.
● Versed in scripting languages such as Ruby and Shell
● Google Cloud Platform, Hadoop, NoSQL databases, and big data clusters.
● Knowledge of open source technologies
Company - Apptware Solutions
Location - Baner, Pune
Team Size - 130+
Job Description -
Cloud Engineer with 8+ years of experience
Roles and Responsibilities
● Have 8+ years of strong experience in deployment, management and maintenance of large systems on-premise or cloud
● Experience maintaining and deploying highly-available, fault-tolerant systems at scale
● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
● Practical experience with Docker containerization and clustering (Kubernetes/ECS)
● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)
● Version control system experience (e.g. Git)
● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)
● Operational (e.g. HA/backups) NoSQL experience (e.g. MongoDB, Redis) and SQL experience (e.g. MySQL)
● Experience with configuration management tools (e.g. Ansible, Chef)
● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)
● Bachelor's or master’s degree in CS, or equivalent practical experience
● Effective communication skills
● Hands-on experience with cloud providers like MS Azure and GCP
● A sense of ownership and ability to operate independently
● Experience with Jira and one or more Agile SDLC methodologies
● Nice to Have:
○ Sensu and Graphite
○ Ruby or Java
○ Python or Groovy
○ Java Performance Analysis
Role: Cloud Engineer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Role Category: Programming & Design
About HighLevel
HighLevel is an all-in-one, white-label marketing platform for agencies & consultants. Our goal as a business is to create a sustainable, powerful, “all things marketing” operating system that creates limitless opportunities for our customers. With over 20,000 customers, we need people like YOU to help us grow and scale even further in the coming years. We currently have 600 employees worldwide, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and, above all, encourage a healthy work-life balance for our employees wherever they call home.
Our Website: https://www.gohighlevel.com/
YouTube Channel: https://www.youtube.com/channel/UCXFiV4qDX5ipEDQcsm1j4g
Role & Responsibilities:
We are seeking a highly skilled Full Stack Developer to join our CRM team. The ideal candidate will have a strong background in Node.js and Vue.js and possess hands-on experience in various technologies and concepts.
- Collaborate with cross-functional teams to design, develop, and maintain CRM applications and features
- Responsible for implementing visual elements that users see and interact with in a web application
- Build and optimize user interfaces using Vue.js for an exceptional user experience
- Develop server-side logic and APIs using Node.js
- Implement robust data storage and retrieval solutions with a focus on ElasticSearch, Data Indexing, Database Sharding, and Autoscaling
- Integrate Message Queues, Pub-sub systems, and Event-Based architectures to enable real-time data processing and event-driven workflows
- Handle real-time data migration and event processing tasks efficiently
- Utilize messaging systems such as ActiveMQ, RabbitMQ, and Kafka to manage data flow and communication within the CRM ecosystem (a consumer sketch follows this list)
- Collaborate closely with front-end and back-end developers, product managers, and data engineers to deliver high-quality solutions
- Optimize applications for maximum speed and scalability
- Ensure the security and integrity of data and application systems
- Troubleshoot and resolve technical issues, bugs, and performance bottlenecks
- Stay updated with emerging technologies and industry trends, and make recommendations for adoption when appropriate
- Participate in code reviews, maintain documentation, and contribute to a culture of continuous improvement
- Provide technical support and mentorship to junior developers when necessary
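As an illustration of the event-driven responsibilities above, the consumer side of a pub-sub workflow might look like the following sketch, written in Python with kafka-python for brevity even though the role itself centres on Node.js; the topic name, broker address, and group id are placeholders.

```python
# Hypothetical example: consume CRM events from a Kafka topic.
# Topic, broker, and group id below are placeholders for illustration.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "crm-contact-events",                  # illustrative topic name
    bootstrap_servers=["localhost:9092"],
    group_id="crm-workers",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Each worker in the consumer group receives a share of the partitions,
    # so processing scales horizontally by adding more workers.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```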
Qualifications:
- Good hands-on experience with Node.js and Vue.js (or React/Angular)
- Strong understanding of ElasticSearch, Data Indexing, Database Sharding, and Auto Scaling techniques.
- Experience working with Message Queues, Pub-sub patterns, and Event-Based architecture
- Proficiency in Real-time Data Migration and Real-time Event Processing
- Familiarity with messaging systems like ActiveMQ, RabbitMQ, and Kafka
- Expertise with MongoDB
- Proficient understanding of code versioning tools, such as Git
- Strong communication and problem-solving skills
- Practical experience in mentoring engineers and doing code reviews
What to Expect when you Apply?
- Exploratory Call
- Technical Round I/II
- Assignment
- Cultural Fitment Round
Perks and Benefits:
- Impact - Work with scale, our infrastructure handles around 5 Billion+ API hits, 2 Billion+ message events, and more than 40 TeraBytes of data
- Learning - Work with a team of A-players distributed across 15 countries who move fast (we have built one of the widest products on the market in under 3 years)
- Unlimited Leave Policy & remote first culture
- 1 team offsite every year
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organization stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences, while providing excellent service to our clients and learning from one another along the way!
Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.