50+ Google Cloud Platform (GCP) Jobs in India
Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!
CoinCROWD is a cutting-edge platform in the digital finance space, focused on delivering innovative solutions that empower individuals and businesses in the cryptocurrency ecosystem. We are passionate about creating seamless, secure, and scalable solutions to simplify the way people interact with digital currencies. As we continue to grow, we're looking for skilled backend developers to join our dynamic engineering team.
Position overview:
We're seeking a detail-oriented and proactive DevOps Engineer who has a strong background in Google Cloud Platform (GCP) environments. The ideal candidate will be comfortable operating in a fast-paced, dynamic startup environment, where they will have the opportunity to make substantial contributions.
Key Responsibilities :
- Develop, test, and maintain infrastructure on GCP.
- Automate infrastructure, application deployment, scaling, and management using Kubernetes and other similar tools.
- Collaborate with our software development team to ensure seamless deployment of software updates and enhancements.
- Monitor system performance and troubleshoot issues (a minimal monitoring sketch follows this list).
- Ensure high levels of performance, availability, sustainability, and security.
- Implement DevOps best practices, such as Infrastructure as Code (IaC).
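To give candidates a concrete sense of the day-to-day scripting behind the monitoring responsibility above, here is a minimal, hedged sketch in Python using only the standard library. The endpoint URL, check interval, and alerting behaviour are hypothetical placeholders, not CoinCROWD's production tooling.

    # Minimal HTTP health check with basic logging (illustrative sketch only).
    import logging
    import time
    import urllib.error
    import urllib.request

    ENDPOINTS = ["https://example.com/healthz"]  # hypothetical endpoint
    CHECK_INTERVAL_SECONDS = 60

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

    def check(url: str) -> bool:
        """Return True if the endpoint answers with HTTP 2xx within 5 seconds."""
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return 200 <= resp.status < 300
        except (urllib.error.URLError, OSError) as exc:
            logging.warning("health check failed for %s: %s", url, exc)
            return False

    if __name__ == "__main__":
        while True:
            for url in ENDPOINTS:
                if not check(url):
                    # In a real setup this would page on-call, e.g. via an alerting webhook.
                    logging.error("ALERT: %s is unhealthy", url)
            time.sleep(CHECK_INTERVAL_SECONDS)

In practice this logic would live behind a proper monitoring stack rather than a loop, but it shows the level of scripting comfort expected.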
Qualifications :
- Proven experience as a DevOps Engineer or similar role in software development and system administration.
- Strong experience with GCP (Google Cloud Platform), including Compute Engine, Cloud Functions, Cloud Storage, and other relevant GCP services.
- Knowledge of Kubernetes, Docker, Jenkins, or similar technologies.
- Familiarity with network protocols, firewalls, and VPN.
- Experience with scripting languages such as Python, Bash, etc.
- Understanding of Infrastructure as Code (IaC) tools, like Terraform or CloudFormation.
- Excellent problem-solving skills, attention to detail, and ability to work in a team.
What We Offer :
In recognition of your valuable contributions, you will receive an equity-based compensation package. Join our dynamic and innovative team in the rapidly evolving fintech industry and play a key role in shaping the future of CoinCROWD's success.
If you're ready to be at the forefront of the Payment Technology revolution and have the vision and experience to drive sales growth in the crypto space, please join us in our mission to redefine fintech at CoinCROWD.
Who are we?
Eliminate Fraud. Establish Trust.
IDfy is an Integrated Identity Platform offering products and solutions for KYC, KYB, Background Verifications, Risk Mitigation, Digital Onboarding, and Digital Privacy. We establish trust while delivering a frictionless experience for you, your employees, customers, and partners.
Only IDfy combines enterprise-grade technology with business understanding and has the widest breadth of offerings in the industry. With more than 12 years of experience and 2 million verifications per day, we are pioneers in this industry.
Our clients include HDFC Bank, IndusInd Bank, Zomato, Amazon, PhonePe, Paytm, HUL, and many others. We have successfully raised $27M from Elev8 Venture Partners, KB Investment, and Tenacity Ventures!
About Team
The machine learning team at IDfy is self-contained and responsible for building models and services that support key workflows. Our models serve as gating criteria for these workflows and are expected to perform accurately and quickly. We use a mix of conventional and hand-crafted deep learning models.
The team comes from diverse backgrounds and experiences and works directly with business and product teams to craft solutions for our customers. We function as a platform, not a services company.
We are the match if you...
- Are a mid-career machine learning engineer (or data scientist) with 4-8 years of experience in data science.
Must-Haves
- Experience in framing and solving problems with machine learning or deep learning models.
- Expertise in either computer vision or natural language processing (NLP), with appropriate production system experience.
- Experience with generative AI, including fine-tuning models and utilizing Retrieval-Augmented Generation (RAG).
- Understanding that modeling is just a part of building and delivering AI solutions, with knowledge of what it takes to keep a high-performance system up and running.
- Proficiency in Python and experience with frameworks like PyTorch, TensorFlow, JAX, Hugging Face, spaCy, etc.
- Enthusiasm and drive to learn and apply state-of-the-art research.
- Ability to write APIs.
- Experience with AI systems and at least one cloud ML platform: AWS SageMaker, GCP Vertex AI, or Azure ML.
Good to Haves
- Familiarity with languages like Go or Elixir, or an interest in functional programming.
- Knowledge and experience in MLOps and tooling, particularly Docker and Kubernetes.
- Experience with other platforms, frameworks, and tools.
Here’s what your day would look like...
In this role, you will:
- Work on all aspects of a production machine learning system, including acquiring data, training and building models, deploying models, building API services for exposing these models, and maintaining them in production.
- Focus on performance tuning of models.
- Occasionally support and debug production systems.
- Research and apply the latest technology to build new products and enhance the existing platform.
- Build workflows for training and production systems.
- Contribute to documentation.
While the emphasis will be on researching, building, and deploying models into production, you will also contribute to other aspects of the project.
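To make the "building API services for exposing these models" part of the role concrete, here is a minimal Flask-based inference sketch. The model file, feature schema, and route are hypothetical, and this is an illustrative sketch rather than IDfy's actual serving stack.

    # Minimal model-serving API sketch (hypothetical model and schema).
    import pickle
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Assumes a scikit-learn-style model with a predict() method was pickled beforehand.
    with open("model.pkl", "rb") as fh:
        model = pickle.load(fh)

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)
        features = payload.get("features")
        if features is None:
            return jsonify({"error": "missing 'features'"}), 400
        prediction = model.predict([features])[0]
        return jsonify({"prediction": float(prediction)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

In production this would sit behind a WSGI server, with logging, metrics, and input validation around it.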
Why Join Us?
- Innovative, Impactful Projects: Work on cutting-edge AI projects that push the boundaries of technology and positively impact billions of people.
- Collaborative Environment: Join a passionate and talented team dedicated to innovation and excellence. Be part of a diverse and inclusive workplace where your ideas and contributions are valued.
- Mentorship Opportunities: Mentor interns and junior team members, with support and coaching to help you develop leadership skills.
Excited already?
At IDfy, you will work on the entire end-to-end solution rather than just a small part of a larger process. Our culture thrives on collaboration, innovation, and impact.
Please Apply - https://zrec.in/RZ7zE?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues that arise while implementing DevOps practices and work with the respective teams to fix them and raise overall productivity. We also run training sessions for developers on the importance of DevOps. Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess a company's technology architecture, security, governance, compliance, and DevOps maturity, and help it optimize cloud costs, streamline its technology architecture, and set up processes that improve the availability and reliability of its websites and applications. We set up tools for monitoring, logging, and observability, and bring a DevOps culture into the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer GCP
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals to deploy, automate, and manage the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring tools, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack :
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in GCP, AWS or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum (a short cost-review sketch follows this list)
- Setting up the right set of security measures
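As a concrete, hedged example of the cost-control responsibility above, here is a short Python wrapper around the gcloud CLI that lists running Compute Engine instances so unused ones can be reviewed. It assumes the Google Cloud SDK is installed and authenticated; the project ID is a placeholder.

    # List RUNNING Compute Engine instances via the gcloud CLI (illustrative sketch).
    import json
    import subprocess

    PROJECT_ID = "my-gcp-project"  # placeholder project

    def running_instances(project: str) -> list[dict]:
        out = subprocess.run(
            ["gcloud", "compute", "instances", "list",
             "--project", project, "--format=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [i for i in json.loads(out) if i.get("status") == "RUNNING"]

    if __name__ == "__main__":
        for inst in running_instances(PROJECT_ID):
            # machineType is returned as a full URL; keep only the trailing type name.
            print(inst["name"], inst["machineType"].rsplit("/", 1)[-1])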
Ideal Candidate Profile:
- A graduate or postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience in a Linux environment
- Strong interest in working with our tech stack
- Excellent communication skills
- Works well with minimal supervision; a self-starter
- Hands-on experience with at least one scripting language (Bash, Python, Go, etc.)
- Experience with version control systems like Git
- Strong experience with GCP
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the different layers of a production architecture
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools such as Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus, Grafana, or Elastic APM
- Experience with logging tools like ELK or Loki
Please Apply - https://zrec.in/sEzbp?source=CareerSite
Job Description
Job Title: DevOps Engineer Azure
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals to deploy, automate, and manage the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring tools, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack :
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in AWS, GCP or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform (a short pod health-check sketch follows this list)
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum
- Setting up the right set of security measures
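To make the Kubernetes troubleshooting expectation more concrete, here is a small, hedged sketch that reports pods not in the Running or Succeeded phase. It simply shells out to kubectl and assumes a configured kubeconfig; it is illustrative, not Infra360 tooling.

    # Report unhealthy pods across all namespaces via kubectl (illustrative sketch).
    import json
    import subprocess

    def unhealthy_pods() -> list[tuple[str, str, str]]:
        out = subprocess.run(
            ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        bad = []
        for pod in json.loads(out)["items"]:
            phase = pod["status"].get("phase", "Unknown")
            if phase not in ("Running", "Succeeded"):
                meta = pod["metadata"]
                bad.append((meta["namespace"], meta["name"], phase))
        return bad

    if __name__ == "__main__":
        for ns, name, phase in unhealthy_pods():
            print(f"{ns}/{name}: {phase}")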
Ideal Candidate Profile:
- A graduate or postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience in a Linux environment
- Strong interest in working with our tech stack
- Excellent communication skills
- Works well with minimal supervision; a self-starter
- Hands-on experience with at least one scripting language (Bash, Python, Go, etc.)
- Experience with version control systems like Git
- Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route 53, IAM, etc.)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the different layers of a production architecture
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools such as Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus, Grafana, or Elastic APM
- Experience with logging tools like ELK or Loki
Please Apply - https://zrec.in/L51Qf?source=CareerSite
Job Description
Job Title: Senior DevOps Engineer (Infrastructure/SRE)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediately
About Us
At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.
Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.
We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.
Role Summary
We are looking for a Senior DevOps Engineer (Infrastructure) to design, automate, and manage cloud-based and datacentre infrastructure for diverse projects. The ideal candidate will have deep expertise in a public cloud platform (AWS, GCP, or Azure), with a strong focus on cost optimization, security best practices, and infrastructure automation using tools like Terraform and CI/CD pipelines.
This role involves designing scalable architectures (containers, serverless, and VMs), managing databases, and ensuring system observability with tools like Prometheus and Grafana. Strong leadership, client communication, and team mentoring skills are essential. Experience with VPN technologies and configuration management tools (Ansible, Helm) is also critical. Multi-cloud experience and familiarity with APM tools are a plus.
Ideal Candidate Profile
- Solid 4-6 years of experience as a DevOps Engineer with a proven track record of architecting and automating solutions in the cloud
- Experience in troubleshooting production incidents and handling high-pressure situations.
- Strong leadership skills and the ability to mentor team members and provide guidance on best practices.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience with Kubernetes, Terraform, ArgoCD, and Helm.
- Strong with at least one public cloud (AWS/GCP/Azure)
- Strong with cost optimization and security best practices
- Strong with infrastructure automation using Terraform and CI/CD automation
- Strong with configuration management using Ansible, Helm, etc.
- Good with designing architectures (containers, serverless, VMs, etc.)
- Hands-on experience working on multiple projects
- Strong with client communication and requirements gathering
- Database management experience
- Good experience with Prometheus, Grafana & Alertmanager
- Able to manage multiple clients and take ownership of client issues.
- Experience with Git and coding best practices
- Proficiency in cloud networking, including VPCs, DNS, VPNs (OpenVPN, OpenSwan, Pritunl, Site-to-Site VPNs), load balancers, and firewalls, ensuring secure and efficient connectivity.
- Strong understanding of cloud security best practices, identity and access management (IAM), and compliance requirements for modern infrastructure.
Good to have
- Multi-cloud experience with AWS, GCP & Azure
- Experience with APM and observability tools like New Relic, Datadog, and OpenTelemetry
- Proficiency in scripting languages (Python, Go) for automation and tooling to improve infrastructure and application reliability (a small retry-helper sketch follows this list)
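As a small illustration of the scripting-for-reliability point above, here is a generic retry-with-backoff helper in Python. It is a hedged sketch, not Infra360 tooling, and the wrapped fetch_metrics function is hypothetical.

    # Generic retry-with-exponential-backoff helper (illustrative sketch).
    import logging
    import time
    from functools import wraps

    def retry(attempts: int = 3, base_delay: float = 1.0):
        """Retry a flaky call with exponential backoff between attempts."""
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(1, attempts + 1):
                    try:
                        return func(*args, **kwargs)
                    except Exception as exc:  # narrow this in real code
                        if attempt == attempts:
                            raise
                        delay = base_delay * 2 ** (attempt - 1)
                        logging.warning("attempt %d failed (%s); retrying in %.1fs",
                                        attempt, exc, delay)
                        time.sleep(delay)
            return wrapper
        return decorator

    @retry(attempts=5, base_delay=0.5)
    def fetch_metrics():
        # Hypothetical flaky call, e.g. an HTTP request to an internal service.
        raise RuntimeError("transient failure")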
Key Responsibilities
- Design and Development:
- Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
- Collaborate with product and engineering teams to translate business requirements into technical specifications.
- Write clean, maintainable, and efficient code, following best practices and coding standards.
- Cloud Infrastructure:
- Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
- Implement and manage CI/CD pipelines for automated deployment and testing.
- Ensure the security, reliability, and performance of cloud infrastructure.
- Technical Leadership:
- Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
- Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
- Lead technical discussions and contribute to architectural decisions.
- Problem Solving and Troubleshooting:
- Identify, diagnose, and resolve complex software and infrastructure issues.
- Perform root cause analysis for production incidents and implement preventative measures.
- Continuous Improvement:
- Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
- Contribute to the continuous improvement of development processes, tools, and methodologies.
- Drive innovation by experimenting with new technologies and solutions to enhance the platform.
- Collaboration:
- Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
- Communicate effectively with stakeholders, including technical and non-technical team members.
- Client Interaction & Management:
- Will serve as a direct point of contact for multiple clients.
- Able to handle the unique technical needs and challenges of two or more clients concurrently.
- This involves both direct interaction with clients and internal team coordination.
- Production Systems Management:
- Must have extensive experience in managing, monitoring, and debugging production environments.
- Will work on troubleshooting complex issues and ensure that production systems are running smoothly with minimal downtime.
Please Apply - https://zrec.in/FQVXt?source=CareerSite
Job Description
Job Title: Associate DevOps Engineer
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 6 Months - 24 Months
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for an Associate DevOps Engineer to join our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals to deploy, automate, and manage the software infrastructure. As an Associate DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring tools, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack :
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in AWS, GCP or Azure will be an added advantage.
- Debugging and Troubleshooting: The Associate DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting (a short backup-upload sketch follows this list)
- Containerizing different applications on the Kubernetes platform
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum
- Setting up the right set of security measures
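To illustrate the backup responsibility above, here is a hedged sketch that uploads a nightly database dump to a Google Cloud Storage bucket. It assumes the google-cloud-storage package and application-default credentials are available; the bucket name and file path are placeholders.

    # Upload a nightly backup file to a GCS bucket (illustrative sketch).
    from datetime import datetime, timezone
    from google.cloud import storage

    BUCKET_NAME = "example-backups"        # placeholder bucket
    LOCAL_BACKUP = "/var/backups/db.dump"  # placeholder file

    def upload_backup(bucket_name: str, local_path: str) -> str:
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
        blob = bucket.blob(f"db/{stamp}.dump")
        blob.upload_from_filename(local_path)
        return blob.name

    if __name__ == "__main__":
        print("uploaded", upload_backup(BUCKET_NAME, LOCAL_BACKUP))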
Ideal Candidate Profile:
- A graduate or postgraduate degree in Computer Science or a related field
- 1-2 years of strong DevOps experience in a Linux environment
- Strong interest in working with our tech stack
- Excellent communication skills
- Works well with minimal supervision; a self-starter
- Hands-on experience with at least one scripting language (Bash, Python, Go, etc.)
- Experience with version control systems like Git
- Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route 53, IAM, etc.)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the different layers of a production architecture
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools such as Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus, Grafana, or Elastic APM
- Experience with logging tools like ELK or Loki
Please Apply - https://zrec.in/GzLLD?source=CareerSite
Job Description
Job Title: DevOps Engineer AWS
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals to deploy, automate, and manage the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring tools, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack :
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in AWS, GCP or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum
- Setting up the right set of security measures (a short tag-audit sketch follows this list)
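To give a flavour of the cost and governance scripting involved, here is a hedged boto3 sketch that flags EC2 instances missing a required cost-allocation tag. It assumes boto3 is installed and AWS credentials are configured; the tag key is a placeholder.

    # Flag EC2 instances missing a required tag (illustrative sketch).
    import boto3

    REQUIRED_TAG = "CostCenter"  # placeholder tag key

    def untagged_instances(required_tag: str) -> list[str]:
        ec2 = boto3.client("ec2")
        missing = []
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    if required_tag not in tags:
                        missing.append(instance["InstanceId"])
        return missing

    if __name__ == "__main__":
        for instance_id in untagged_instances(REQUIRED_TAG):
            print("missing tag:", instance_id)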
Ideal Candidate Profile:
- A graduate or postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience in a Linux environment
- Strong interest in working with our tech stack
- Excellent communication skills
- Works well with minimal supervision; a self-starter
- Hands-on experience with at least one scripting language (Bash, Python, Go, etc.)
- Experience with version control systems like Git
- Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route 53, IAM, etc.)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the different layers of a production architecture
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools such as Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus, Grafana, or Elastic APM
- Experience with logging tools like ELK or Loki
Please Apply - https://zrec.in/7EYKe?source=CareerSite
Job Description
Job Title: Senior DevOps Engineer / SRE
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediately
About Us
At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.
Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.
We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.
Role Summary
We are seeking a Senior DevOps Engineer (SRE) to manage and optimize large-scale, mission-critical production systems. The ideal candidate will have a strong problem-solving mindset, extensive experience in troubleshooting, and expertise in scaling, automating, and enhancing system reliability. This role requires hands-on proficiency in tools like Kubernetes, Terraform, CI/CD, and cloud platforms (AWS, GCP, Azure), along with scripting skills in Python or Go. The candidate will drive observability and monitoring initiatives using tools like Prometheus, Grafana, and APM solutions (Datadog, New Relic, OpenTelemetry).
Strong communication, incident management skills, and a collaborative approach are essential. Experience in team leadership and multi-client engagement is a plus.
Ideal Candidate Profile
- Solid 4-6 years of experience as an SRE and DevOps engineer with a proven track record of handling large-scale production environments
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Strong Hands-on experience with managing Large Scale Production Systems
- Strong Production Troubleshooting Skills and handling high-pressure situations.
- Strong Experience with Databases (PostgreSQL, MongoDB, ElasticSearch, Kafka)
- Worked on making production systems more Scalable, Highly Available and Fault-tolerant
- Hands-on experience with ELK or other logging and observability tools
- Hands-on experience with Prometheus, Grafana & Alertmanager, and with on-call processes like PagerDuty (a small Prometheus query sketch follows this list)
- Problem-Solving Mindset
- Strong skills in K8s, Terraform, Helm, ArgoCD, AWS/GCP/Azure, etc.
- Good with Python/Go scripting and automation
- Strong on fundamentals like DNS, networking, and Linux
- Experience with APM tools like New Relic, Datadog, and OpenTelemetry
- Good experience with incident response, incident management, and writing detailed RCAs
- Experience with Applications best practices in making apps more reliable and fault-tolerant
- Strong leadership skills and the ability to mentor team members and provide guidance on best practices.
- Able to manage multiple clients and take ownership of client issues.
- Experience with Git and coding best practices
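For context on the observability skills listed above, here is a minimal, hedged sketch that queries a Prometheus server's HTTP API for scrape targets that are down. The server URL is a placeholder, and only the standard /api/v1/query endpoint is used.

    # Query Prometheus for scrape targets that are down (illustrative sketch).
    import json
    import urllib.parse
    import urllib.request

    PROM_URL = "http://localhost:9090"  # placeholder Prometheus server

    def down_targets(base_url: str) -> list[dict]:
        query = urllib.parse.urlencode({"query": "up == 0"})
        with urllib.request.urlopen(f"{base_url}/api/v1/query?{query}", timeout=10) as resp:
            body = json.load(resp)
        return body["data"]["result"]

    if __name__ == "__main__":
        for series in down_targets(PROM_URL):
            labels = series["metric"]
            print("DOWN:", labels.get("job"), labels.get("instance"))

In a real on-call setup, Alertmanager would route this condition to PagerDuty rather than a script polling the API.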
Good to have
- Team-leading Experience
- Multiple Client Handling
- Requirements gathering from clients
- Good Communication
Key Responsibilities
- Design and Development:
- Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
- Collaborate with product and engineering teams to translate business requirements into technical specifications.
- Write clean, maintainable, and efficient code, following best practices and coding standards.
- Cloud Infrastructure:
- Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
- Implement and manage CI/CD pipelines for automated deployment and testing.
- Ensure the security, reliability, and performance of cloud infrastructure.
- Technical Leadership:
- Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
- Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
- Lead technical discussions and contribute to architectural decisions.
- Problem Solving and Troubleshooting:
- Identify, diagnose, and resolve complex software and infrastructure issues.
- Perform root cause analysis for production incidents and implement preventative measures.
- Continuous Improvement:
- Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
- Contribute to the continuous improvement of development processes, tools, and methodologies.
- Drive innovation by experimenting with new technologies and solutions to enhance the platform.
- Collaboration:
- Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
- Communicate effectively with stakeholders, including technical and non-technical team members.
- Client Interaction & Management:
- Will serve as a direct point of contact for multiple clients.
- Able to handle the unique technical needs and challenges of two or more clients concurrently.
- This involves both direct interaction with clients and internal team coordination.
- Production Systems Management:
- Must have extensive experience in managing, monitoring, and debugging production environments.
- Will work on troubleshooting complex issues and ensure that production systems are running smoothly with minimal downtime.
About the Role:
We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.
Key Responsibilities:
Cloud Management:
- Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
- Ensure high availability, scalability, and security of cloud resources.
Containerization & Orchestration:
- Develop and manage containerized applications using Docker.
- Deploy, scale, and manage Kubernetes clusters.
CI/CD Pipelines:
- Build and maintain robust CI/CD pipelines to automate the software delivery process.
- Implement monitoring and alerting to ensure pipeline efficiency.
Version Control & Collaboration:
- Manage code repositories and workflows using Git.
- Collaborate with development teams to optimize branching strategies and code reviews.
Automation & Scripting:
- Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
- Write scripts to optimize and maintain workflows.
Monitoring & Logging:
- Implement and maintain monitoring solutions to ensure system health and performance.
- Analyze logs and metrics to troubleshoot and resolve issues (a short log-analysis sketch follows).
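As a hedged illustration of the log-analysis point above, here is a short Python sketch that tallies the most frequent error lines in an application log; the log path and format are hypothetical.

    # Count and summarize ERROR lines in a log file (illustrative sketch).
    import re
    from collections import Counter
    from pathlib import Path

    LOG_PATH = Path("/var/log/app/app.log")  # placeholder path
    ERROR_RE = re.compile(r"\bERROR\b\s+(?P<msg>.*)")

    def top_errors(path: Path, limit: int = 10) -> list[tuple[str, int]]:
        counts = Counter()
        with path.open(errors="replace") as fh:
            for line in fh:
                match = ERROR_RE.search(line)
                if match:
                    counts[match.group("msg").strip()[:120]] += 1
        return counts.most_common(limit)

    if __name__ == "__main__":
        for message, count in top_errors(LOG_PATH):
            print(f"{count:6d}  {message}")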
Required Skills & Qualifications:
- 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
- Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
- Hands-on experience building and managing CI/CD pipelines.
- Proficient in using Git for version control.
- Experience with scripting languages such as Bash, Python, or PowerShell.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Solid understanding of networking, security, and system administration.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and teamwork skills.
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with serverless architectures and microservices.
Job Description
Position: SRE Developer / DevOps Engineer
Location: Mumbai
Experience: 3-10 years
About HaystackAnalytics:
HaystackAnalytics is a company working in the deep technology of genomics, computing, and data science to create a first-of-its-kind clinical reporting engine in healthcare. We are a new but well-funded company with a tremendous amount of pedigree in the team (IIT founders, IIT & IIM core team). Some of the technologies we have created are global firsts in infectious disease and chronic diagnostics. As a product company creating a huge amount of IP, our Technology and R&D teams are our crown jewels. With the early success of our products in India, we are now expanding to take them to international shores.
Inviting Passionate Engineers to join a new age enterprise:
At HaystackAnalytics, we rely on our dynamic team of engineers to solve the many challenges and puzzles that come with our rapidly evolving stack that deals with Healthcare and Genomics.
We’re looking for full stack engineers who are passionate problem solvers, ready to work with new technologies and architectures in a forward-thinking organization that’s always pushing boundaries. Here, you will take complete, end-to-end ownership of projects across the entire stack.
Our ideal candidate has experience building enterprise products and an understanding of, and experience working with, new-age front-end technologies, web frameworks, APIs, databases, distributed computing, back-end languages, caching, security, message-based architectures, and more.
You’ll be joining a small team working at the forefront of new technology, solving the challenges that impact both the front end and back end architecture, and ultimately, delivering amazing global user experiences.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state-of-the-art, enterprise-standard front-end and back-end services
- Develop cloud platform services based on a container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are unit-testable, automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS/SASS)
- ES6 / TypeScript
- Electron apps / Tauri
- Component libraries (Web Components / Radix / Material)
- CSS (Tailwind)
- State management: Redux / Zustand / Recoil
- Build tools: webpack / Vite / Parcel / Turborepo
- Frameworks: Next.js
- Design patterns
- Test automation frameworks (Cypress, Playwright, etc.)
- Functional programming concepts
- Scripting (Bash, Python)
Backend Skills
- Node / Deno / Bun with Express / NestJS
- Languages: TypeScript / Python / Rust
- REST / GraphQL
- SOLID design principles
- Storage (MongoDB / object storage / Postgres)
- Caching (Redis / in-memory data grid)
- Pub/sub (Kafka / SQS / SNS / EventBridge / RabbitMQ)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift)
- GitOps
- Automation (Terraform, serverless)
Other Skills
- Innovation and thought leadership
- UI/UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
A niche, specialist position in an interdisciplinary team focused on end-to-end solutions. The nature of the projects ranges from proof-of-concept innovative applications and parallel implementations per end-user requests to scaling up and continuous monitoring for improvements. The majority of the projects focus on providing automation solutions, both through custom solutions and by adapting generic machine learning standards to specific use cases and domains.
The clientele includes major publishers from the US and Europe, pharmaceutical bigwigs, and government-funded projects.
As a Senior Fullstack Developer, you will be responsible for designing, building, and maintaining scalable and performant web applications using modern technologies. You will work with cutting-edge tools and cloud infrastructure (primarily Google Cloud) and build robust services with React.js and TypeScript on the front end and Koa.js, MongoDB, and Redis on the back end, while ensuring reliable and efficient monitoring with OpenTelemetry and logging with Bunyan. Your expertise in CI/CD pipelines and modern testing frameworks will be key to maintaining a smooth and efficient software development lifecycle.
Key Responsibilities:
- Fullstack Development: Design, develop, and maintain web applications using JavaScript (Node.js for back-end and React.js with Typescript for front-end).
- Cloud Infrastructure: Leverage Google Cloud services (like Compute Engine, Cloud Storage, Pub/Sub, etc.) to build scalable and resilient cloud solutions.
- API Development: Implement RESTful APIs and microservices with Koa.js, ensuring high performance, security, and scalability.
- Database Management: Manage MongoDB databases for storing and retrieving application data, and use Redis for caching and session management.
- Logging and Monitoring: Utilize Bunyan for structured logging and OpenTelemetry for distributed tracing and monitoring to ensure system health and performance.
- CI/CD: Design, implement, and maintain efficient CI/CD pipelines for continuous integration and deployment, ensuring fast and reliable code delivery.
- Testing & Quality Assurance: Write unit and integration tests using Jest, Mocha, and React Testing Library to ensure code reliability and maintainability.
- Collaboration: Work closely with front-end and back-end engineers to deliver high-quality software solutions, following agile development practices.
- Optimization & Scaling: Identify performance bottlenecks, troubleshoot production issues, and scale the system as needed.
- Code Reviews & Mentorship: Conduct peer code reviews, share best practices, and mentor junior developers to improve team efficiency and code quality.
Must-Have Skills:
- Google Cloud (GCP): Hands-on experience with various Google Cloud services (Compute Engine, Cloud Storage, Pub/Sub, Firestore, etc.) for building scalable applications.
- React.js: Strong experience in building modern, responsive user interfaces with React.js and Typescript
- Koa.js: Strong experience in building web servers and APIs with Koa.js.
- MongoDB & Redis: Proficiency in working with MongoDB (NoSQL databases) and Redis for caching and session management.
- Bunyan: Experience using Bunyan for structured logging and tracking application events.
- OpenTelemetry Ecosystem: Hands-on experience with the OpenTelemetry ecosystem for monitoring and distributed tracing.
- CI/CD: Proficient in setting up CI/CD pipelines using tools like CircleCI, Jenkins, or GitLab CI.
- Testing Frameworks: Solid understanding and experience with Jest, Mocha, and React Testing Library for testing both back-end and front-end applications.
- JavaScript & Node.js: Strong proficiency in JavaScript (ES6+), and experience working with Node.js for back-end services.
Desired Skills & Experience:
- Experience with other cloud platforms (AWS, Azure).
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
- Experience working with TypeScript.
- Knowledge of other logging and monitoring tools.
- Familiarity with agile methodologies and project management tools (JIRA, Trello, etc.).
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5-10 years of hands-on experience as a Fullstack Developer.
- Strong problem-solving skills and ability to debug complex systems.
- Excellent communication skills and ability to work in a team-oriented, collaborative environment.
We are looking for a seasoned DevOps Engineer with a strong background in solution architecture, ideally from the Banking or BFSI (Banking, Financial Services, and Insurance) domain. This role is crucial for implementing scalable, secure infrastructure and CI/CD practices tailored to the needs of high-compliance, high-availability environments. The ideal candidate will have deep expertise in Docker, Kubernetes, cloud platforms, and solution architecture, with knowledge of ML/AI and database management as a plus.
Key Responsibilities:
● Infrastructure & Solution Architecture: Design secure, compliant, and high-performance cloud infrastructures (AWS, Azure, or GCP) optimized for BFSI-specific applications.
● Containerization & Orchestration: Lead Docker and Kubernetes initiatives, deploying applications with a focus on security, compliance, and resilience.
● CI/CD Pipelines: Build and maintain CI/CD pipelines suited to BFSI workflows, incorporating automated testing, security checks, and rollback mechanisms.
● Cloud Infrastructure & Database Management: Manage cloud resources and automate provisioning using Terraform, ensuring security standards. Optimize relational and NoSQL databases for BFSI application needs.
● Monitoring & Incident Response: Implement monitoring and alerting (e.g., Prometheus, Grafana) for rapid incident response, ensuring uptime and reliability.
● Collaboration: Work closely with compliance, security, and development teams, aligning infrastructure with BFSI standards and regulations.
Qualifications:
● Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field.
● Experience: 5+ years of experience in DevOps with cloud infrastructure and solution architecture expertise, ideally in ML/AI environments.
● Technical Skills:
○ Cloud Platforms: Proficient in AWS, Azure, or GCP; certifications (e.g., AWS Solutions Architect, Azure Solutions Architect) are a plus.
○ Containerization & Orchestration: Expertise with Docker and Kubernetes, including experience deploying and managing clusters at scale.
○ CI/CD Pipelines: Hands-on experience with CI/CD tools like Jenkins, GitLab CI, or GitHub Actions, with automation and integration for ML/AI workflows preferred.
○ Infrastructure as Code: Strong knowledge of Terraform and/or CloudFormation for infrastructure provisioning.
○ Database Management: Proficiency in relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, DynamoDB), with a focus on optimization and scalability.
○ ML/AI Infrastructure: Experience supporting ML/AI pipelines, model serving, and data processing within cloud or hybrid environments.
○ Monitoring and Logging: Proficient in monitoring tools like Prometheus and Grafana, and log management solutions like ELK Stack or Splunk.
○ Scripting and Automation: Strong skills in Python, Bash, or PowerShell for scripting and automating processes.
About Job
We are seeking an experienced Data Engineer to join our data team. As a Senior Data Engineer, you will work on various data engineering tasks, including designing and optimizing data pipelines, data modelling, and troubleshooting data issues. You will collaborate with other data team members, stakeholders, and data scientists to provide data-driven insights and solutions to the organization. 3+ years of experience is required.
Responsibilities:
Design and optimize data pipelines for various data sources
Design and implement efficient data storage and retrieval mechanisms
Develop data modelling solutions and data validation mechanisms
Troubleshoot data-related issues and recommend process improvements
Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
Coach and mentor junior data engineers in the team
Skills Required:
3+ years of experience in data engineering or related field
Strong experience in designing and optimizing data pipelines and in data modelling
Strong proficiency in Python
Experience with big data technologies like Hadoop, Spark, and Hive (a brief PySpark sketch follows this list)
Experience with cloud data services such as AWS, Azure, and GCP
Strong experience with database technologies like SQL, NoSQL, and data warehousing
Knowledge of distributed computing and storage systems
Understanding of DevOps; knowledge of Power Automate and Microsoft Fabric will be an added advantage
Strong analytical and problem-solving skills
Excellent communication and collaboration skills
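To ground the pipeline and Spark requirements above, here is a minimal, hedged PySpark sketch of a batch aggregation step; the file paths and column names are hypothetical, and it is a sketch of the technique rather than any specific pipeline.

    # Minimal batch aggregation step with PySpark (illustrative sketch).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_revenue").getOrCreate()

    # Read raw order records and cast the amount column for aggregation.
    orders = (spark.read.option("header", True).csv("s3://example-bucket/orders/*.csv")
              .withColumn("amount", F.col("amount").cast("double")))

    # Aggregate completed orders into a daily revenue mart.
    daily = (orders.filter(F.col("status") == "COMPLETED")
                   .groupBy("order_date")
                   .agg(F.sum("amount").alias("revenue"),
                        F.count("*").alias("orders")))

    daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")
    spark.stop()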
Qualifications
Bachelor's degree in Computer Science, Data Science, or a computer-related field (Master's degree preferred)
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration
for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We are dedicated to providing equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
What we are looking for: Backend Lead Engineer
As a Backend Lead Engineer, you will play a pivotal role in driving the technical vision and execution of our product development team. You will lead a team of talented engineers, mentor their growth, and ensure the delivery of high-quality, scalable, and maintainable software solutions.
Responsibilities:
• Technical Leadership:
o Provide technical leadership and guidance to a team of backend engineers.
o Mentor and develop engineers to enhance their skills and capabilities.
o Collaborate with product managers, designers, and other stakeholders to define
product requirements and technical solutions.
• Development:
o Design, develop, and maintain robust and scalable backend applications.
o Optimize application performance and scalability for large-scale deployments.
o Write clean, efficient, and well-tested code that adheres to best practices.
o Stay up-to-date with the latest technologies and trends in web development.
• Project Management:
o Lead and manage software development projects from inception to deployment.
o Estimate project timelines, assign tasks, and track progress.
o Ensure timely delivery of high-quality software.
• Problem-Solving:
o Identify and troubleshoot technical issues.
o Develop innovative solutions to complex problems.
• Architecture Design:
o Design and implement scalable and maintainable software architectures.
o Ensure the security, performance, and reliability of our systems.
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 6+ years of experience in backend software development.
• Proven experience leading and mentoring engineering teams.
• Strong proficiency in backend technologies (Python, Node.js, Java).
• Experience with GCP and deployment at scale.
• Familiarity with database technologies (SQL, NoSQL).
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
Preferred Qualifications:
• Experience with agile development methodologies (Scrum, Kanban).
• Knowledge of DevOps practices and tools.
• Experience with microservices architecture.
• Contributions to open-source projects.
• Leverage your expertise in Python to design and implement distributed systems for web scraping, ensuring robust data extraction from diverse web sources.
• Develop and optimize scalable, self-healing scraping frameworks, integrated with AI tools for intelligent automation of the data collection process.
• Implement monitoring, logging, and alerting mechanisms to ensure high availability and performance of distributed web scraping systems.
• Work with large-scale NoSQL databases (e.g., MongoDB) to store and query scraped data efficiently.
• Collaborate with cross-functional teams to research and implement innovative AI-driven solutions for data extraction and automation.
• Ensure data integrity and security while interacting with various web sources.
Required Skills:
• Extensive experience with Python and web frameworks like Flask, FastAPI, or Django.
• Experience with AI tools and machine learning libraries to enhance and automate scraping processes.
• Solid understanding of building and maintaining distributed systems, with hands-on experience in parallel programming (multithreading, asynchronous programming, multiprocessing); a minimal end-to-end sketch follows this list.
• Working knowledge of asynchronous queue systems like Redis, Celery, RabbitMQ, etc., to handle distributed scraping tasks.
• Proven experience with web mining, scraping tools (e.g., Scrapy, BeautifulSoup, Selenium), and handling dynamic content.
• Proficiency in working with NoSQL data storage systems like MongoDB, including querying and handling large datasets.
• Knowledge of working with various front-end technologies and how websites are built.
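As referenced in the list above, here is a minimal sketch of how these pieces could fit together, assuming Celery with a Redis broker, requests + BeautifulSoup for extraction, and pymongo for storage; all URLs, database/collection names, and retry settings are hypothetical.

```python
# Minimal sketch of a distributed scraping task: Celery worker with a Redis broker,
# requests + BeautifulSoup for extraction, MongoDB for storage. Names are hypothetical.
import requests
from bs4 import BeautifulSoup
from celery import Celery
from pymongo import MongoClient

app = Celery("scraper", broker="redis://localhost:6379/0")
mongo = MongoClient("mongodb://localhost:27017")
collection = mongo["scraper"]["pages"]

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def scrape_page(self, url: str) -> str:
    try:
        response = requests.get(url, timeout=15)
        response.raise_for_status()
    except requests.RequestException as exc:
        # "Self-healing": retry transient failures with a delay.
        raise self.retry(exc=exc)

    soup = BeautifulSoup(response.text, "html.parser")
    document = {
        "url": url,
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "links": [a["href"] for a in soup.find_all("a", href=True)],
    }
    collection.update_one({"url": url}, {"$set": document}, upsert=True)
    return url

# A producer process would fan work out to many workers, e.g.:
# for url in ["https://example.com"]:
#     scrape_page.delay(url)
```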
Key Responsibilities:
- Cloud Infrastructure Management: Oversee the deployment, scaling, and management of cloud infrastructure across platforms like AWS, GCP, and Azure. Ensure optimal configuration, security, and cost-effectiveness.
- Application Deployment and Maintenance: Responsible for deploying and maintaining web applications, particularly those built on Django and the MERN stack (MongoDB, Express.js, React, Node.js). This includes setting up CI/CD pipelines, monitoring performance, and troubleshooting.
- Automation and Optimization: Develop scripts and automation tools to streamline operations. Continuously seek ways to improve system efficiency and reduce downtime.
- Security Compliance: Ensure that all cloud deployments comply with relevant security standards and practices. Regularly conduct security audits and coordinate with security teams to address vulnerabilities.
- Collaboration and Support: Work closely with development teams to understand their needs and provide technical support. Act as a liaison between developers, IT staff, and management to ensure smooth operation and implementation of cloud solutions.
- Disaster Recovery and Backup: Implement and manage disaster recovery plans and backup strategies to ensure data integrity and availability.
- Performance Monitoring: Regularly monitor and report on the performance of cloud services and applications. Use data to make informed decisions about upgrades, scaling, and other changes.
Required Skills and Experience:
- Proven experience in managing cloud infrastructure on AWS, GCP, and Azure.
- Strong background in deploying and maintaining Django-based and MERN stack web applications.
- Expertise in automation tools and scripting languages.
- Solid understanding of network architecture and security protocols.
- Experience with continuous integration and deployment (CI/CD) methodologies.
- Excellent problem-solving abilities and a proactive approach to system optimization.
- Good communication skills for effective collaboration with various teams.
Desired Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in AWS, GCP, or Azure are highly desirable.
- Minimum 5 years of experience in a DevOps or similar role, with a focus on cloud computing and web application deployment.
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We work at scale; our infrastructure handles around 3 Billion+ API hits & 2 Billion+ message events monthly and over 25M views of customer pages daily. We also handle over 80 Terabytes of data across 5 Databases.
About the Team:
The Expansion Products team is responsible for driving volumetric and usage-based upgrades and upsells within the platform to maximize revenue potential (apart from the subscription revenue). We do this by building innovative products & features that solve real-world problems for agencies and allow them to consolidate their offering to their clients in a single platform packaged under their white-labeled brand. The Expansion Products team focuses exclusively on products that can demonstrate adoption, drive up engagement in target segments, and are easily monetizable. This team handles multiple product areas including Phone System, email system, online listing integration, WordPress Hosting, Memberships & Courses, Mobile Apps, etc.
About the Role:
We’re looking for a skilled Senior Software Engineer for the Membership Platform to help us take our platform’s infrastructure to the next level. In this role, you'll focus on keeping our databases fast and reliable, improving and managing the infrastructure, and reducing technical debt so we can scale smoothly as we grow. You’ll play a key part in ensuring our platform is stable, secure, and easy for our product teams to work with. This is an exciting opportunity to work on large-scale systems and make a direct impact on the experience of millions of users.
Responsibilities:
- Optimize and manage scalable databases to ensure high performance and reliability.
- Automate and maintain infrastructure using IaC tools, CI/CD pipelines, and best security practices.
- Identify, prioritize, and address technical debt to improve performance and maintainability.
- Implement monitoring and observability solutions to support high availability and incident response.
- Collaborate with cross-functional teams and document processes, mentoring engineers and sharing knowledge.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 4+ years in platform engineering, with expertise in large-scale databases and infrastructure.
- Experience in full-stack engineering with Node.js and modern JavaScript frameworks like Vue.js (preferred), React.js, or Angular.
- Strong background in cloud platforms (AWS, GCP, or Azure).
- Proficient in building scalable applications and comfortable reasoning about the end-to-end flow of the software.
- Experience with relational and non-relational databases, e.g., MySQL, MongoDB, Firestore.
- Experience with monitoring tools (e.g., Prometheus, Grafana); containerization (Docker, Kubernetes) and video streaming knowledge are a plus.
Client located in Bangalore.
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Job Title: Solution Architect (ML, Cloud)
Experience: 5-10 years
Client Location: Bangalore
Work Location: Tokyo, Japan (Onsite)
Key Responsibilities:
Collaborate with stakeholders to understand business needs and develop scalable, efficient technical solutions.
Architect and implement complex systems integrating Machine Learning, Cloud platforms (AWS, Azure, Google Cloud), and Full Stack Development.
Lead the development and deployment of cloud-native applications using NoSQL databases, Python, and Kubernetes.
Design and optimize algorithms to improve performance, scalability, and reliability of solutions.
Review, validate, and refine architecture to ensure flexibility, scalability, and cost-efficiency.
Mentor development teams and ensure adherence to best practices for coding, testing, and deployment.
Contribute to the development of technical documentation and solution roadmaps.
Stay up-to-date with emerging technologies and continuously improve solution design processes.
Required Skills & Qualifications:
5-10 years of experience as a Solution Architect or similar role with expertise in ML, Cloud, and Full Stack Development.
Proficiency in at least two major cloud platforms (AWS, Azure, Google Cloud).
Solid experience with Kubernetes for container orchestration and deployment.
Hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
Expertise in Python and ML frameworks like TensorFlow, PyTorch, etc.
Practical experience implementing at least two real-world algorithms (e.g., classification, clustering, recommendation systems).
Strong knowledge of scalable architecture design and cloud-native application development.
Familiarity with CI/CD tools and DevOps practices.
Excellent problem-solving abilities and the ability to thrive in a fast-paced environment.
Strong communication and collaboration skills with cross-functional teams.
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Qualifications:
Experience with microservices and containerization.
Knowledge of distributed systems and high-performance computing.
Cloud certifications (AWS Certified Solutions Architect, Google Cloud Professional Architect, etc.).
Familiarity with Agile methodologies and Scrum.
Japanese language proficiency is an added advantage (but not mandatory).
Skills: ML, cloud (any two major clouds), algorithms (at least two implemented; a minimal sketch follows below), full stack, Kubernetes, NoSQL, Python
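To illustrate the "two algorithms" requirement above, here is a minimal scikit-learn sketch covering classification and clustering on synthetic data; it assumes scikit-learn is installed and is not tied to any particular project described here.

```python
# Minimal sketch: one classification and one clustering algorithm on synthetic data.
# Assumes scikit-learn is installed; all data here is generated, not real.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Classification: a logistic regression baseline.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Clustering: k-means on the same features (labels unused).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X_train)
print("cluster sizes:", [int((kmeans.labels_ == c).sum()) for c in range(2)])
```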
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (5-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Japanese language proficiency is an added advantage (not mandatory).
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes
Client based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Japanese language proficiency is an added advantage (not mandatory).
About the Company-
AdPushup is an award-winning ad revenue optimization platform and Google Certified Publishing Partner (GCPP), helping hundreds of web publishers grow their revenue using cutting-edge technology, premium demand partnerships, and proven ad ops expertise.
Our team is a mix of engineers, marketers, product evangelists, and customer success specialists, united by a common goal of helping publishers succeed. We have a work culture that values expertise, ownership, and a collaborative spirit.
Job Overview - Java Backend Lead Role:
We are seeking a highly skilled and motivated Software Engineering Team Lead to join our dynamic team. The ideal candidate will have a strong technical background, proven leadership experience, and a passion for mentoring and developing a team of talented engineers. This role will be pivotal in driving the successful delivery of high-quality software solutions and fostering a collaborative and innovative work environment.
Exp- 5+ years
Location- New Delhi
Work Mode- Hybrid
Key Responsibilities:-
● Leadership and Mentorship: Lead, mentor, and develop a team of software engineers, fostering an environment of continuous improvement and professional growth.
● Project Management: Oversee the planning, execution, and delivery of software projects, ensuring they meet quality standards, timelines, and budget constraints.
● Technical Expertise: Provide technical guidance and expertise in software design, architecture, development, and best practices. Stay updated with the latest industry trends and technologies. Design, develop, and maintain high-quality applications, taking full, end-to-end ownership, including writing test cases, setting up monitoring, etc.
● Collaboration: Work closely with cross-functional teams to define project requirements, scope, and deliverables.
● Code Review and Quality Assurance: Conduct code reviews to ensure adherence to coding standards, best practices, and overall software quality. Implement and enforce quality assurance processes.
● Problem Solving: Identify, troubleshoot, and resolve technical challenges and bottlenecks. Provide innovative solutions to complex problems.
● Performance Management: Set clear performance expectations, provide regular feedback, and conduct performance evaluations for team members.
● Documentation: Ensure comprehensive documentation of code, processes, and project-related information.
Qualifications:-
● Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
● Experience: Minimum of 5 years of experience in software development.
● Technical Skills:
○ A strong body of prior backend work, successfully delivered in production. Experience building large volume data processing pipelines will be an added bonus.
○ Expertise in Core Java.
■ In-depth knowledge of the Java concurrency framework.
■ Sound knowledge of concepts like exception handling, garbage collection, and generics.
■ Experience in writing unit test cases, using any framework.
■ Hands-on experience with lambdas and streams.
■ Experience in using build tools like Maven and Ant.
○ Good understanding of and hands-on experience with Java frameworks (e.g., Spring Boot, Vert.x) will be an added advantage.
○ Good understanding of security best practices.
○ Hands-on experience with low-level and high-level design practices and patterns.
○ Hands-on experience with any of the cloud platforms, such as AWS, Azure, or Google Cloud.
○ Familiarity with containerization and orchestration tools like Docker, Kubernetes, and Terraform.
○ Strong understanding of database technologies, both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Couchbase).
○ Knowledge of DevOps practices and tools such as Jenkins, CI/CD.
○ Strong understanding of software development methodologies (e.g., Agile, Scrum).
● Leadership Skills: Proven ability to lead, mentor, and inspire a team of engineers. Excellent interpersonal and communication skills.
● Problem-Solving Skills: Strong analytical and problem-solving abilities. Ability to think critically and provide innovative solutions.
● Project Management: Experience in managing software projects from conception to delivery. Strong organizational and time-management skills.
● Collaboration: Ability to work effectively in a cross-functional team environment. Strong collaboration and stakeholder management skills.
● Adaptability: Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements.
Why Should You Work for AdPushup?
At AdPushup, we have
1. A culture of valuing our employees and promoting an autonomous, transparent, and ethical work environment.
2. Talented and supportive peers who value your contributions.
3. Challenging opportunities: learning happens outside the comfort-zone and that’s where our team likes to be - always pushing the boundaries and growing personally and professionally.
4. Flexibility to work from home: We believe in work & performance instead of measuring conventional benchmarks like work-hours.
5. Plenty of snacks and catered lunch.
6. Transparency: an open, honest and direct communication with co-workers and business associates.
Google Workspace Apps Developer
About the Role
Kinematic Digital is seeking an experienced Software Developer specializing in Google Workspace application development. The ideal candidate will create, maintain, and enhance custom applications and integrations within the Google Workspace ecosystem, including Google Docs, Sheets, Drive, Gmail, and Calendar.
Key Responsibilities
- Design and develop custom Google Workspace applications using Google Apps Script and Google Cloud Platform
- Create automation solutions and workflow improvements using Google Workspace APIs
- Build integrations between Google Workspace and other enterprise systems
- Implement security best practices and ensure compliance with Google's security guidelines
- Maintain and update existing Google Workspace applications and scripts
- Debug and optimize application performance
- Provide technical documentation and support training materials
Required Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or related field
- 3+ years of experience in software development
- Strong proficiency in JavaScript and Google Apps Script
- Experience with Google Workspace APIs and SDK
- Knowledge of HTML5, CSS3, and modern web development practices
- Understanding of RESTful APIs and web services
- Experience with version control systems (Git)
- Strong problem-solving and analytical skills
Preferred Qualifications
- Google Cloud Platform certification
- Experience with Google Workspace Add-ons development
- Knowledge of OAuth 2.0 and security protocols
- Familiarity with Google Apps Script Advanced Services
- Experience with Node.js and modern JavaScript frameworks
- Background in enterprise software development
- Experience with Workspace administrative tasks and configurations
Technical Skills
- Languages: JavaScript, HTML5, CSS3
- Platforms: Google Apps Script, Google Cloud Platform
- APIs: Google Workspace APIs (Docs, Sheets, Drive, Gmail, Calendar)
- Tools: Google Cloud Console, Apps Script IDE, Git
- Security: OAuth 2.0, Google Cloud IAM
Required Experience with Google Workspace Development
- Creating custom functions and macros for Google Sheets
- Building automation workflows across Workspace applications
- Developing custom sidebars and dialog interfaces
- Managing document permissions and sharing programmatically
- Implementing time-triggered and event-driven scripts
- Creating custom menus and user interfaces
- Working with Google Workspace Add-ons
Project Examples
The successful candidate will work on projects such as:
- Automated document generation and management systems
- Custom reporting and analytics dashboards
- Workflow automation between different Google Workspace applications
- Integration with third-party systems and databases
- Custom forms and data collection solutions
- Document approval and review systems
- Team collaboration tools and templates
Location
- Pune, Mumbai or Remote
CLOUDSUFI is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Job type - Fulltime / Contract
Summary
We are looking for a Senior MLOps Engineer to support the AI CoE in building and scaling machine learning operations. This position requires both strategic oversight and direct involvement in MLOps infrastructure design, automation, and optimization. The person will lead a team while collaborating with various stakeholders to manage machine learning pipelines and model deployments in GCP / AWS / Azure. A key part of this role is managing data and models using data cataloging tools, ensuring that they are well-documented, versioned, and accessible for reuse and auditing.
Job Description:
⮚ Deploy models to production in GCP and own the model maintenance, monitoring and support activities
⮚ Split time between high-level strategy and hands-on technical implementation
⮚ Architect, build, and maintain scalable MLOps pipelines, with a focus on GCP / AWS / Azure services such as Vertex AI, GKE, Cloud Storage, and BigQuery; stay up-to-date with the latest trends and advancements in MLOps (a minimal Vertex AI deployment sketch follows this list)
⮚ Implement and optimize CI/CD pipelines for machine learning model deployment, ensuring minimal downtime and streamlined processes
⮚ Work closely with data scientists and data engineers to ensure efficient data processing pipelines, model training, testing, and deployment
⮚ Manage data catalog tools for model and dataset versioning, lineage tracking, and governance. Ensure that all models and datasets are properly documented and discoverable
⮚ Develop automated systems for model monitoring, logging, and performance tracking in production environments
⮚ Lead the integration of data cataloging tools (e.g., OpenMetadata), ensuring the traceability and versioning of both datasets and models.
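As referenced in the pipeline item above, here is a minimal sketch of registering and deploying a model with the Vertex AI Python SDK (google-cloud-aiplatform); the project, bucket, and serving-image values are placeholders, and the container image should be chosen to match the actual model framework and version.

```python
# Minimal sketch (assumptions: google-cloud-aiplatform installed, a trained model
# artifact already in Cloud Storage, placeholder project/region/URI values).
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical values

model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://my-bucket/model/",  # hypothetical artifact path
    # Example prebuilt serving image; pick the one matching your framework/version.
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

endpoint = model.deploy(machine_type="n1-standard-2")  # creates/uses an Endpoint
print(endpoint.resource_name)
```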
Required Experience:
⮚ Bachelor’s degree in Computer Science, Engineering or similar quantitative disciplines
⮚ 4+ years of professional experience in MLOps or similar roles
⮚ Candidate should be able to write ML code
⮚ Excellent analytical and problem-solving skills for technical challenges related to MLOps
⮚ Excellent English proficiency, presentation, and communication skills
⮚ Proven experience in deploying, monitoring, and managing machine learning models on GCP / AWS / Azure
⮚ Hands-on experience with data catalog tools
⮚ Expert in GCP / AWS / Azure services such as Vertex AI, GKE, BigQuery, Cloud Build, and endpoints for building scalable ML infrastructure (official GCP / AWS / Azure certifications are a huge plus)
⮚ Experience with model serving frameworks (e.g., TensorFlow Serving, TorchServe) and MLOps tools like Kubeflow, MLflow, or TFX
Senior Backend Developer
Job Overview: We are looking for a highly skilled and experienced Backend Developer who excels in building robust, scalable backend systems using multiple frameworks and languages. The ideal candidate will have 4+ years of experience working with at least two backend frameworks and be proficient in at least two programming languages such as Python, Node.js, or Go. As a Sr. Backend Developer, you will play a critical role in designing, developing, and maintaining backend services, ensuring seamless real-time communication with WebSockets, and optimizing system performance with tools like Redis, Celery, and Docker.
Key Responsibilities:
- Design, develop, and maintain backend systems using multiple frameworks and languages (Python, Node.js, Go).
- Build and integrate APIs, microservices, and other backend components.
- Implement real-time features using WebSockets and ensure efficient server-client communication (a minimal sketch follows this list).
- Collaborate with cross-functional teams to define, design, and ship new features.
- Optimize backend systems for performance, scalability, and reliability.
- Troubleshoot and debug complex issues, providing efficient and scalable solutions.
- Work with caching systems like Redis to enhance performance and manage data.
- Utilize task queues and background job processing tools like Celery.
- Develop and deploy applications using containerization tools like Docker.
- Participate in code reviews and provide constructive feedback to ensure code quality.
- Mentor junior developers, sharing best practices and promoting a culture of continuous learning.
- Stay updated with the latest backend development trends and technologies to keep our solutions cutting-edge.
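As referenced in the responsibilities above, here is a minimal sketch combining a FastAPI WebSocket endpoint with Redis caching; it assumes fastapi, uvicorn, and redis-py (4.2+ for redis.asyncio) are installed, and the route and key names are hypothetical.

```python
# Minimal sketch: FastAPI WebSocket endpoint that echoes messages and caches the
# last message per client in Redis. Route, host, and key names are hypothetical.
import redis.asyncio as redis
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

@app.websocket("/ws/{client_id}")
async def ws_endpoint(websocket: WebSocket, client_id: str):
    await websocket.accept()
    try:
        while True:
            message = await websocket.receive_text()
            # Cache the latest message for one hour.
            await cache.set(f"last_message:{client_id}", message, ex=3600)
            await websocket.send_text(f"ack: {message}")
    except WebSocketDisconnect:
        # Client closed the connection; nothing to clean up in this sketch.
        pass

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```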
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 4+ years of professional experience as a Backend Developer.
- Proficiency in at least two programming languages: Python, Node.js, or Go.
- Experience working with multiple backend frameworks (e.g., Express, Flask, Gin, Fiber, FastAPI).
- Strong understanding of WebSockets and real-time communication.
- Hands-on experience with Redis for caching and data management.
- Familiarity with task queues like Celery for background job processing.
- Experience with Docker for containerizing applications and services.
- Strong knowledge of RESTful API design and implementation.
- Understanding of microservices architecture and distributed systems.
- Solid understanding of database technologies (SQL and NoSQL).
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, both written and verbal.
Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with GraphQL and other modern API paradigms.
- Familiarity with task queues, caching, or message brokers (e.g., Celery, Redis, RabbitMQ).
- Understanding of security best practices in backend development.
- Knowledge of automated testing frameworks for backend services.
- Familiarity with version control systems, particularly Git.
Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary, benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.
About The Role:
The products/services of Eclat Engineering Pvt. Ltd. are used by some of the leading institutions in India and abroad, and demand for them is growing rapidly. We are looking for a capable and dynamic Senior DevOps Engineer to help set up, maintain, and scale our infrastructure operations. This individual will have the challenging responsibility of running our IT infrastructure and delivering customer services to stringent international standards of service quality, and will leverage the latest IT tools to automate and streamline the delivery of our services while implementing industry-standard processes and knowledge management.
Roles & Responsibilities:
- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure
provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and
scalable.
- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and
performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.
- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource
usage. Implement cost-saving measures while maintaining scalability and reliability.
- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure
consistency across environments. Automate configuration changes and updates.
- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry
standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.
- Collaboration with Development and Operations Teams: Foster collaboration between development and
operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure
issues and improving the development process.
- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure
business continuity in the event of system failures or other disruptions.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations,
processes, and best practices. Share knowledge and mentor junior team members.
- Technical Leadership and Innovation: Stay up-to-date with industry trends and emerging technologies. Lead efforts
to introduce new tools and technologies that enhance DevOps practices.
- Problem Resolution and Troubleshooting: Be responsible for diagnosing and resolving complex issues related to
infrastructure and deployments. Implement preventive measures to reduce recurring problems.
Requirements:
● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc.IT (if not should be able to demonstrate required skills)
● Overall 3+ years of experience in DevOps and Cloud operations specifically in AWS.
● Experience with Linux administration
● Experience with microservice architecture, containers, Kubernetes, and Helm is a must
● Experience in Configuration Management preferably Ansible
● Experience in Shell Scripting is a must
● Experience in developing and maintaining CI/CD processes using tools like Gitlab, Jenkins
● Experience in logging, monitoring and analytics
● An understanding of writing Infrastructure as Code using tools like Terraform
● Preferences - AWS, Kubernetes, Ansible
Must Have:
● Knowledge of AWS Cloud Platform.
● Good experience with microservice architecture, Kubernetes, Helm, and container-based technologies
● Hands-on experience with Ansible.
● Should have experience in working and maintaining CI/CD Processes.
● Hands-on experience with version control tools like Git.
● Experience with monitoring tools such as Cloudwatch/Sysdig etc.
● Sound experience in administering Linux servers and Shell Scripting.
● Should have a good understanding of IT security and have the knowledge to secure production environments (OS and server software).
At Appscrip
Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.
- Ensure optimal performance and low latency for API endpoints, focusing on real-time data processing (a minimal serving sketch follows).
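A minimal sketch of the kind of FastAPI service described above, exposing a generation endpoint; the run_model() function is a hypothetical stand-in for a real LLM or LangChain call, and the route and field names are illustrative only.

```python
# Minimal sketch: a FastAPI endpoint exposing a text-generation model.
# Assumes fastapi and pydantic are installed; run_model() is hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class GenerateResponse(BaseModel):
    text: str

def run_model(prompt: str, max_tokens: int) -> str:
    # Hypothetical stand-in for an actual LLM call (e.g., a LangChain chain).
    return f"[generated {max_tokens}-token stub for: {prompt[:40]}]"

@app.post("/generate", response_model=GenerateResponse)
async def generate(req: GenerateRequest) -> GenerateResponse:
    # A real service would offload heavy model calls to keep latency low.
    return GenerateResponse(text=run_model(req.prompt, req.max_tokens))

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```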
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
Job Description
The Opportunity
The Springboard engineering team is looking for software engineers with strong backend & frontend technical expertise. In this role, you would be responsible for building exciting features aimed at improving our student experience and expanding our student base, using the latest technologies like GenAI, as relevant. You would also contribute to making our platform more robust, flexible and scalable. This is a great opportunity to create a meaningful impact as well as grow in your career.
We are looking for engineers with different levels of experience and expertise. Depending on your proficiency levels, you will join our team as a Software Engineer II, Senior Software Engineer or Lead Software Engineer.
Responsibilities
- Design and develop features for the Springboard platform, which enriches the learning experience of thousands through human guided learning at scale
- Own quality and reliability of the product by getting hands on with code and design reviews, debugging complex issues and so on
- Contribute to the platform architecture through redesign of complex features based on evolving business needs
- Influence and establish best engineering practices through solid design decisions, processes and tools
- Provide technical mentoring to team members
You
- You have experience with web application development, on both, backend and frontend.
- You have a solid understanding of software design principles and best practices.
- You have hands-on experience in:
- Coding and debugging complex systems, with frontend integration.
- Code review, responsible for production deployments.
- Building scalable and fault-tolerant applications.
- Re-architecting / re-designing complex systems / features (i.e. managing technical debt).
- Defining and following best practices for frontend and backend systems.
- You have excellent problem solving skills and are comfortable handling ambiguity.
- You are able to analyze various alternatives and reach optimal decisions.
- You are willing to challenge the status quo, express your opinion and drive change.
- You are able to plan reasonably complex pieces of work and can handle changing priorities, unknowns and challenges with support. You want to contribute to the platform roadmap, aligning with the organization priorities and goals.
- You enjoy mentoring others and helping them solve challenging problems.
- You have excellent written and verbal communication skills with the ability to present complex technical information in a clear and concise manner. You are able to communicate with various stakeholders to understand their requirements.
- You are a proponent of quality - building best practices, introducing new processes and improvements to make the team more efficient.
Non-negotiables
Must have
- Expertise in Backend development (Python & Django experience preferred)
- Expertise in Frontend development (AngularJS / ReactJS / VueJS experience preferred)
- Experience working with SQL databases
- Experience building multiple significant features for web applications
Good to have
- Experience with Google Cloud Platform (or any cloud platform)
- Experience working with any Learning Management System (LMS), such as Canvas
- Experience working with GenAI ecosystem, including usage of AI tools such as code completion
- Experience with CI/CD pipelines and applications deployed on Kubernetes
- Experience with refactoring (redesigning complex systems / features, breaking monolith into services)
- Experience working with NoSQL databases
- Experience with Web performance optimization, SEO, Gatsby and FE Analytics
- Delivery skills, specifically planning open ended projects
- Mentoring skills
Expectations
- Able to work with open ended problems and come up with efficient solutions
- Able to communicate effectively with business stakeholders to clarify requirements for small to medium tasks and own end to end delivery
- Able to communicate estimations, plan deviations and blockers in an efficient and timely manner to all project stakeholders
About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.
Key Responsibilities:
- Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
- Design and build efficient, secure, and modular client-side and server-side architecture
- Develop high-performance web applications with reusable and maintainable code
- Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
- Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
- Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance (a minimal sketch follows this list)
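As referenced above, here is a minimal sketch of an HTTP Cloud Function written with the Python Functions Framework; the function name and the request/response shape are hypothetical, not part of koolio.ai's actual codebase.

```python
# Minimal sketch of an HTTP Cloud Function using the Python Functions Framework.
# Assumes the functions-framework package; the payload shape is hypothetical.
import functions_framework
from flask import jsonify

@functions_framework.http
def annotate_audio(request):
    """Accepts JSON like {"title": ...} and returns a stub annotation payload."""
    payload = request.get_json(silent=True) or {}
    title = payload.get("title", "untitled")
    # A real function would call downstream Google Cloud services here.
    return jsonify({"title": title, "status": "queued"})

# Local test: functions-framework --target=annotate_audio --port=8080
```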
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: Minimum of 6+ years of proven experience as a Full Stack Developer or similar role, with demonstrable expertise in building web applications at scale
- Technical Skills:
- Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
- Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
- Familiarity with NoSQL and PostgreSQL databases
- Experience working with audio/video processing libraries is a strong plus
- Soft Skills:
- Strong problem-solving skills and the ability to think critically about issues and solutions
- Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
- Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
- Keen attention to detail and a passion for delivering high-quality, scalable solutions
- Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment
Compensation and Benefits:
- Total Yearly Compensation: ₹25 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
- ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We work at scale; our infrastructure handles around 30 Billion+ API hits, 20 Billion+ message events, and more than 200 Terabytes of data.
About the role:
Seeking a seasoned Full Stack Developer with hands-on experience in Node.js and Vue.js (or React/Angular). You will be instrumental in building cutting-edge, AI-powered products along with mentoring or leading a team of engineers.
Team-Specific Focus Areas:
Conversations AI:
-Develop AI solutions for appointment booking, form filling, sales, and intent recognition
-Ensure seamless integration and interaction with users through natural language processing and understanding
Workflows AI:
-Create and optimize AI-powered workflows to automate and streamline business processes
Voice AI:
-Focus on VOIP technology with an emphasis on low latency and high-quality voice interactions
-Fine-tune voice models for clarity, accuracy, and naturalness in various applications
Support AI:
-Integrate AI solutions with FreshDesk and ClickUp to enhance customer support and ticketing systems
-Develop tools for automated response generation, issue resolution, and workflow management
Platform AI:
-Oversee AI training, billing, content generation, funnels, image processing, and model evaluations
-Ensure scalable and efficient AI models that meet diverse platform needs and user demands
Responsibilities:
- REST APIs - Understanding REST philosophy. Writing secure, reusable, testable, and efficient APIs.
- Database - Designing collection schemas and writing efficient queries
- Frontend - Developing user-facing features and integration with REST APIs
- UI/UX - Being consistent with the design principles and reusing components wherever possible
- Communication - With other team members, product team, and support team
Requirements:
- Expertise with large-scale conversational agents and response evaluations
- Good hands-on experience with Node.js and Vue.js (or React/Angular)
- Experience working with production-grade applications with decent usage
- Bachelor's degree or equivalent experience in Engineering or related field of study
- 5+ years of engineering experience
- Expertise with MongoDB
- Proficient understanding of code versioning tools, such as Git
- Strong communication and problem-solving skills
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
Key Responsibilities:
- Azure Cloud Sales & Solutioning: Lead Microsoft Azure cloud sales efforts across global regions, delivering solutions for applications, databases, and SAP servers based on customer requirements.
- Customer Engagement: Act as a trusted advisor for customers, leading them through their cloud transformation by understanding their requirements and recommending suitable cloud solutions.
- Lead Generation & Cost Optimization: Generate leads independently, provide cost-optimized Azure solutions, and continuously work to maximize value for clients.
- Sales Certifications: Hold basic Microsoft sales certifications (Foundation & Business Professional).
- Project Management: Oversee and manage Azure cloud projects, including setting up timelines, guiding technical teams, and communicating progress to customers. Ensure the successful completion of project objectives.
- Cloud Infrastructure Expertise: Maintain a deep understanding of Azure cloud infrastructure and services, including migrations, disaster recovery (DR), and cloud budgeting.
- Billing Management: Manage Azure billing processes, including subscription-based invoicing, purchase orders, renewals, license billing, and tracking expiration dates.
- Microsoft License Sales: Expert in selling Microsoft licenses such as SQL, Windows, and Office 365.
- Client Collaboration: Schedule meetings with internal teams and clients to align on project requirements and ensure effective communication.
- Customer Management: Track leads, follow up on calls, and ensure customer satisfaction by resolving issues and optimizing cloud resources. Provide regular updates on Microsoft technologies and programs.
- Field Sales: Participate in presales meetings and client visits to gather insights and propose cloud solutions.
- Internal Collaboration: Work closely with various internal departments to achieve project results and meet client expectations.
Qualifications:
- 1-3+ years of experience selling or consulting with corporate/public sector/enterprise customers on Microsoft Azure cloud.
- Proficient in Azure cost optimization, cloud infrastructure, and sales of cloud solutions to end customers.
- Experience in generating leads and tracking sales progress.
- Project management experience with strong organizational skills.
- Ability to work collaboratively with internal teams and customers.
- Strong communication and problem-solving skills.
- SHIFT: DAY SHIFT
- WORKING DAYS: MON-SAT
- LOCATION: HYDERABAD
- WORK MODEL: WORK FROM THE OFFICE
REQUIRED QUALIFICATIONS:
- A degree in Computer Science or equivalent - Graduation
BENEFITS FROM THE COMPANY:
- Strong opportunities for career growth.
- Flexible working hours and excellent infrastructure.
- A team of passionate colleagues around you.
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment (see the sketch after this list).
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
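To make the end-to-end lifecycle responsibility above concrete, here is a compressed, illustrative sketch of a single train/evaluate/persist pass in Python with scikit-learn. The dataset, column names, and file paths are placeholders, not artifacts of this role.

```python
# Minimal sketch of the train/evaluate/persist loop referenced above.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")            # data collection (placeholder file)
X, y = df.drop(columns=["label"]), df["label"]    # simple preprocessing step

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)                       # model training

print("F1 score:", f1_score(y_test, model.predict(X_test), average="weighted"))  # evaluation

joblib.dump(model, "model.joblib")                # artifact handed off to deployment
```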
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
About Lean Technologies
Lean is on a mission to revolutionize the fintech industry by providing developers with a universal API to access their customers' financial accounts across the Middle East. We’re breaking down infrastructure barriers and empowering the growth of the fintech industry. With Sequoia leading our $33 million Series A round, Lean is poised to expand its coverage across the region while continuing to deliver unparalleled value to developers and stakeholders.
Join us and be part of a journey to enable the next generation of financial innovation. We offer competitive salaries, private healthcare, flexible office hours, and meaningful equity stakes to ensure long-term alignment. At Lean, you'll work on solving complex problems, build a lasting legacy, and be part of a diverse, inclusive, and equal opportunity workplace.
About the role:
Are you a highly motivated and experienced software engineer looking to take your career to the next level? Our team at Lean is seeking a talented engineer to help us build the distributed systems that allow our engineering teams to deploy our platform in multiple geographies across various deployment solutions. You will work closely with functional heads across software, QA, and product teams to deliver scalable and customizable release pipelines.
Responsibilities
- Distributed systems architecture – understand and manage the most complex systems
- Continual reliability and performance optimization – enhancing observability stack to improve proactive detection and resolution of issues
- Employing cutting-edge methods and technologies, continually refining existing tools to enhance performance and drive advancements
- Problem-solving capabilities – troubleshooting complex issues and proactively reducing toil through automation
- Experience in technical leadership and setting technical direction for engineering projects
- Collaboration skills – working across teams to drive change and provide guidance
- Technical expertise – deep skills and the ability to act as a subject matter expert in one or more of: IaC, observability, coding, reliability, debugging, system design
- Capacity planning – effectively forecasting demand and reacting to changes
- Analyze and improve efficiency, scalability, and stability of various system resources
- Incident response – rapidly detecting and resolving critical incidents. Minimizing customer impact through effective collaboration, escalation (including periodic on-call shifts) and postmortems
Requirements
- 10+ years of experience in Systems Engineering, DevOps, or SRE roles running large-scale infrastructure, cloud, or web services
- Strong background in Linux/Unix Administration and networking concepts
- We work on OCI but would accept candidates with solid GCP/AWS or other cloud providers’ knowledge and experience
- 3+ years of experience with managing Kubernetes clusters, Helm, Docker (see the sketch after this list)
- Experience in operating CI/CD pipelines that build and deliver services on the cloud and on-premise
- Work with CI/CD tools/services like Jenkins, GitHub Actions, ArgoCD, etc.
- Experience with configuration management tools either Ansible, Chef, Puppet, or equivalent
- Infrastructure as Code - Terraform
- Experience in production environments with both relational and NoSQL databases
- Coding with one or more of the following: Java, Python, and/or Go
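As a flavour of the Kubernetes work mentioned above, the following illustrative Python sketch uses the official kubernetes client to surface pods that are not healthy, the kind of signal an observability or automation script might act on. It assumes a local kubeconfig; namespaces and cluster names are whatever your context points at.

```python
# Minimal sketch: list pods whose phase is not Running/Succeeded.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```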
Bonus
- Multi-cloud or hybrid cloud experience
- OCI and GCP
Why Join Us?
At Lean, we value talent, drive, and entrepreneurial spirit. We are constantly on the lookout for individuals who identify with our mission and values, even if they don’t meet every requirement. If you're passionate about solving hard problems and building a legacy, Lean is the right place for you. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, gender, sexual orientation, or disability.
We are seeking a Cloud Architect for a Geocode Service Center Modernization Assessment and Implementation project. The primary objective of the project is to migrate the legacy Geocode Service Center to a cloud-based solution. Initial efforts will focus on leading the assessment and design work, followed by implementation of the approved design.
Responsibilities:
- System Design and Architecture: Design and develop scalable, cloud-based geocoding systems that meet business requirements.
- Integration: Integrate geocoding services with existing cloud infrastructure and applications (see the sketch after this list).
- Performance Optimization: Optimize system performance, ensuring high availability, reliability, and efficiency.
- Security: Implement robust security measures to protect geospatial data and ensure compliance with industry standards.
- Collaboration: Work closely with data scientists, developers, and other stakeholders to understand requirements and deliver solutions.
- Innovation: Stay updated with the latest trends and technologies in cloud computing and geospatial analysis to drive innovation.
- Documentation: Create and maintain comprehensive documentation for system architecture, processes, and configurations.
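To illustrate the integration responsibility above, the Python sketch below calls a cloud-hosted geocoding REST endpoint. The URL, parameters, and response shape are hypothetical placeholders and do not represent the API of any specific vendor (Precisely, Esri, or otherwise).

```python
# Illustrative only: calling a generic geocoding REST endpoint.
import requests

GEOCODE_URL = "https://geocode.example.com/v1/geocode"  # placeholder endpoint


def geocode(address: str, api_key: str) -> dict:
    resp = requests.get(
        GEOCODE_URL,
        params={"address": address, "key": api_key},  # hypothetical parameters
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"lat": ..., "lon": ..., "confidence": ...}


if __name__ == "__main__":
    print(geocode("1600 Amphitheatre Parkway, Mountain View, CA", api_key="demo-key"))
```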
Requirements:
- Educational Background: Bachelor’s or Master’s degree in Computer Science, Information Technology, Geography, or a related field.
- Technical Proficiency: Extensive experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and geocoding tools such as Precisely and Esri.
- Programming Skills: Proficiency in programming languages such as Python, Java, or C#.
- Analytical Skills: Strong analytical and problem-solving skills to design efficient geocoding systems.
- Experience: Proven experience in designing and implementing cloud-based solutions, preferably with a focus on geospatial data.
- Communication Skills: Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Certifications: Relevant certifications in cloud computing (e.g., AWS Certified Solutions Architect) and geospatial technologies are a plus.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/il0hc?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
Wissen Technology is hiring for a DevOps Engineer
Required:
- 4 to 10 years of relevant experience in DevOps
- Must have hands-on experience with AWS, Kubernetes, and CI/CD pipelines
- Good to have exposure to GitHub or GitLab
- Open to work from Chennai
- Work mode will be Hybrid
Company profile:
Company Name : Wissen Technology
Group of companies in India : Wissen Technology & Wissen Infotech
Work Location - Chennai
Website : www.wissen.com
Wissen Thought leadership : https://lnkd.in/gvH6VBaU
LinkedIn: https://lnkd.in/gnK-vXjF
Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements (see the sketch after this list).
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
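As a small illustration of the EC2 management work above, the Python sketch below uses boto3 to list instances and their current state. The region and tag names are placeholders; credentials are expected to come from the usual AWS configuration or environment.

```python
# Minimal sketch: enumerate EC2 instances and their state with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        name = next(
            (t["Value"] for t in instance.get("Tags", []) if t["Key"] == "Name"),
            "<unnamed>",
        )
        print(instance["InstanceId"], name, instance["State"]["Name"])
```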
Qualifications:
• Bachelor's degree in computer science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development.
Someshwara Software Pvt Ltd is an equal opportunity employer. We celebrate diversity and are dedicated to creating an inclusive environment for all employees.
Job Description
We are seeking a talented DevOps Engineer to join our dynamic team. The ideal candidate will have a passion for building and maintaining cloud infrastructure while ensuring the reliability and efficiency of our applications. You will be responsible for deploying and maintaining cloud environments, enhancing CI/CD pipelines, and ensuring optimal performance through proactive monitoring and troubleshooting.
Roles and Responsibilities:
- Cloud Infrastructure: Deploy and maintain cloud infrastructure on Microsoft Azure or AWS, ensuring scalability and reliability.
- CI/CD Pipeline Enhancement: Continuously improve CI/CD pipelines and build robust development and production environments.
- Application Deployment: Manage application deployments, ensuring high reliability and minimal downtime.
- Monitoring: Monitor infrastructure health and perform application log analysis to identify and resolve issues proactively (see the sketch after this list).
- Incident Management: Troubleshoot and debug incidents, collaborating closely with development teams to implement effective solutions.
- Infrastructure as Code: Enhance Ansible roles and Terraform modules, maintaining best practices for Infrastructure as Code (IaC).
- Tool Development: Write tools and utilities to streamline and improve infrastructure operations.
- SDLC Practices: Establish and uphold industry-standard Software Development Life Cycle (SDLC) practices with a strong focus on quality.
- On-call Support: Be available 24/7 for on-call incident management for production environments.
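To give a feel for the log analysis work called out in the monitoring responsibility above, here is an illustrative Python sketch that counts ERROR lines per minute in an application log and flags spikes. The log path, timestamp format, and threshold are assumptions for the example only.

```python
# Minimal sketch of proactive log analysis: flag minutes with an error spike.
import re
from collections import Counter

LOG_PATH = "/var/log/app/application.log"                     # placeholder path
TIMESTAMP = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2})")   # minute-level bucket

errors_per_minute = Counter()
with open(LOG_PATH) as fh:
    for line in fh:
        if "ERROR" in line:
            match = TIMESTAMP.match(line)
            if match:
                errors_per_minute[match.group(1)] += 1

for minute, count in sorted(errors_per_minute.items()):
    if count > 50:                                            # arbitrary alert threshold
        print(f"Spike: {count} errors at {minute}")
```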
Requirements:
- Cloud Experience: Hands-on experience deploying and provisioning virtual machines on Microsoft Azure or Amazon AWS.
- Linux Administration: Proficient with Linux systems and basic system administration tasks.
- Networking Knowledge: Working knowledge of network fundamentals (Ethernet, TCP/IP, WAF, DNS, etc.).
- Scripting Skills: Proficient in BASH and at least one high-level scripting language (Python, Ruby, Perl).
- Tools Proficiency: Familiarity with tools such as Git, Nagios, Snort, and OpenVPN.
- Containerization: Strong experience with Docker and Kubernetes is mandatory.
- Communication Skills: Excellent interpersonal communication skills, with the ability to engage with peers, customers, vendors, and partners across all levels of the organization.
NASDAQ-listed, service provider IT company
Job Summary:
As a Cloud Architect at our organization, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.
Key Responsibilities:
- Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
- Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
- Develop and maintain cloud-native solutions leveraging services from various cloud providers.
- Architect and deploy microservices using REST and GraphQL to support our application development needs.
- Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
- Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
- Implement robust security practices and policies to protect cloud environments and data.
- Design and implement data management strategies, including data governance, data integration, and data security.
- Stay up to date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
- Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
- Optimize cost and performance across different cloud environments.
Qualifications/ Experience & Skills Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 10 - 15 Years
- Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
- Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
- Strong knowledge of cloud-native solutions and microservices architecture.
- Proficiency in using GraphQL for designing and implementing APIs.
- Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
- Experience with DevOps practices and tools, including CI/CD pipelines.
- Excellent problem-solving skills and ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
- Experience with data management, including data governance, data integration, and data security.
Preferred Skills:
- Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with cloud cost management and optimization tools.
A company at the forefront of innovation in the digital video industry
Responsibilities:
- Work with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Creating a well-informed cloud strategy and managing the adaptation process
- Evaluating cloud applications, hardware, and software
- Develop and manage well-functioning databases and applications
- Write effective APIs
- Participate in the entire application lifecycle, focusing on coding and debugging
- Write clean code to develop, maintain and manage functional web applications
- Get feedback from, and build solutions for, users and customers
- Participate in requirements, design, and code reviews
- Engage with customers to understand and solve their issues
- Collaborate with remote team on implementing new requirements and solving customer problems
- Focus on quality of deliverables with high accountability and commitment to program objectives
Required Skills:
- 7–10 years of software development experience
- Experience using Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or other major cloud computing services.
- Strong skills in Containers, Kubernetes, Helm
- Proficiency in C#, .NET, PHP/Java technologies, with an aptitude for code analysis, debugging, and problem solving
- Strong skills in database design (PostgreSQL or MySQL)
- Experience with caching and message queues
- Experience in REST API framework design
- Strong focus on high-quality and maintainable code
- Understanding of multithreading, memory management, object-oriented programming
Preferred skills:
- Experience in working with Linux OS
- Experience in Core Java programming
- Experience in working with JSP/Servlets, Struts, Spring / Spring Boot, Hibernate
- Experience in working with web technologies HTML, CSS
- Knowledge of development and source versioning tools, particularly Jira, Git, Stash, and Jenkins.
- Domain Knowledge of Video, Audio Codecs
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and recognition through a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry, with over four years of experience, to join our team. The Senior Data Engineer will oversee the department's data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines (see the sketch after this list).
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, Hadoop, and other Apache projects, as well as NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
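To make the ETL requirement above tangible, here is a small illustrative Python/pandas sketch: extract sensor readings from CSV, clean and aggregate them, and load the result as Parquet for a lakehouse. File paths, column names, and the aggregation are assumptions for the example, not details of TVARIT's pipelines.

```python
# Minimal extract-transform-load sketch with pandas.
import pandas as pd

raw = pd.read_csv("raw/sensor_readings.csv", parse_dates=["timestamp"])   # extract

clean = (
    raw.dropna(subset=["machine_id", "temperature"])                      # transform
       .assign(temperature=lambda df: df["temperature"].clip(lower=-50, upper=500))
)
hourly = (
    clean.set_index("timestamp")
         .groupby("machine_id")
         .resample("1h")["temperature"]
         .mean()
         .reset_index()
)

hourly.to_parquet("curated/temperature_hourly.parquet", index=False)      # load
```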
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud infrastructure.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits and Perks:
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
- Architectural Leadership:
- Design and architect robust, scalable, and high-performance Hadoop solutions.
- Define and implement data architecture strategies, standards, and processes.
- Collaborate with senior leadership to align data strategies with business goals.
- Technical Expertise:
- Develop and maintain complex data processing systems using Hadoop and its ecosystem (HDFS, YARN, MapReduce, Hive, HBase, Pig, etc.); see the sketch after this list.
- Ensure optimal performance and scalability of Hadoop clusters.
- Oversee the integration of Hadoop solutions with existing data systems and third-party applications.
- Strategic Planning:
- Develop long-term plans for data architecture, considering emerging technologies and future trends.
- Evaluate and recommend new technologies and tools to enhance the Hadoop ecosystem.
- Lead the adoption of big data best practices and methodologies.
- Team Leadership and Collaboration:
- Mentor and guide data engineers and developers, fostering a culture of continuous improvement.
- Work closely with data scientists, analysts, and other stakeholders to understand requirements and deliver high-quality solutions.
- Ensure effective communication and collaboration across all teams involved in data projects.
- Project Management:
- Lead large-scale data projects from inception to completion, ensuring timely delivery and high quality.
- Manage project resources, budgets, and timelines effectively.
- Monitor project progress and address any issues or risks promptly.
- Data Governance and Security:
- Implement robust data governance policies and procedures to ensure data quality and compliance.
- Ensure data security and privacy by implementing appropriate measures and controls.
- Conduct regular audits and reviews of data systems to ensure compliance with industry standards and regulations.
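As one illustration of the Hadoop-ecosystem development mentioned above, the PySpark sketch below reads a Hive-managed table and writes back an aggregated rollup. The table and column names are placeholders, and it assumes a Spark cluster with Hive support enabled.

```python
# Minimal PySpark sketch: read a Hive table, aggregate, write the result back.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily-events-rollup")
    .enableHiveSupport()
    .getOrCreate()
)

events = spark.table("analytics.events")            # placeholder Hive table

daily = (
    events.groupBy("event_date", "event_type")
          .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").saveAsTable("analytics.events_daily_rollup")
spark.stop()
```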
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities that have been identified as a critical area for growth and specialization within Global IT's scope. As part of the Cyber Intelligence Operations DevOps Team, you will help shape our automation efforts by building, maintaining, and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience, whether in IT, IS, or software development
- Experience in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
Job Description - Manager Sales
Minimum 15 years of experience.
Should have experience in sales of the Cloud IT SaaS products portfolio that Savex deals with.
Team management experience, leading a cloud business including its teams.
Sales manager - Cloud Solutions
Reporting to Sr Management
Good personality
Distribution background
Keen on Channel partners
Good database of OEMs and channel partners.
Age group - 35 to 45yrs
Male Candidate
Good communication
B2B Channel Sales
Location - Bangalore
If interested reply with cv and below details
Total exp -
Current ctc -
Exp ctc -
Np -
Current location -
Qualification -
Total exp Channel Sales -
What are the Cloud IT products you have done sales for?
What is the annual revenue generated through sales?
About the job
MangoApps builds enterprise products that make employees at organizations across the globe more effective and productive in their day-to-day work. We seek tech pros, great communicators, collaborators, and efficient team players for this role.
Job Description:
Experience: 5+ years (relevant experience as an SRE)
Open positions: 2
Job Responsibilities as an SRE
- Must have very strong experience in Linux (Ubuntu) administration
- Strong in network troubleshooting
- Experienced in handling and diagnosing the root cause of compute and database outages
- Strong experience required with cloud platforms, specifically Azure or GCP (proficiency in at least one is mandatory)
- Must have very strong experience in designing, implementing, and maintaining highly available and scalable systems
- Must have expertise in CloudWatch or similar log systems and troubleshooting using them (see the sketch after this list)
- Proficiency in scripting and programming languages such as Python, Go, or Bash is essential
- Familiarity with configuration management tools such as Ansible, Puppet, or Chef is required
- Must possess knowledge of database/SQL optimization and performance tuning.
- Respond promptly to and resolve incidents to minimize downtime
- Implement and manage infrastructure using IaC tools like Terraform, Ansible, or CloudFormation
- Excellent problem-solving skills with a proactive approach to identifying and resolving issues are essential.
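As a flavour of the log troubleshooting expertise above, the Python sketch below pulls recent ERROR events from a CloudWatch log group with boto3 during incident triage. The log group name, filter pattern, region, and time window are illustrative assumptions.

```python
# Minimal sketch: fetch recent ERROR events from CloudWatch Logs.
import time
import boto3

logs = boto3.client("logs", region_name="ap-south-1")   # placeholder region

now_ms = int(time.time() * 1000)
resp = logs.filter_log_events(
    logGroupName="/app/production/api",                 # placeholder log group
    filterPattern="ERROR",
    startTime=now_ms - 15 * 60 * 1000,                   # last 15 minutes
    endTime=now_ms,
)

for event in resp.get("events", []):
    print(event["timestamp"], event["message"].strip())
```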
Experience: 5+ Years
• Experience in Core Java, Spring Boot
• Experience in microservices and angular
• Extensive experience in developing enterprise-scale systems for global organizations. Should possess good architectural knowledge and be aware of enterprise application design patterns.
• Should be able to analyze, design, develop and test complex, low-latency client-facing applications.
• Good development experience with RDBMS in SQL Server, Postgres, Oracle or DB2
• Good knowledge of multi-threading
• Basic working knowledge of Unix/Linux
• Excellent problem solving and coding skills in Java
• Strong interpersonal, communication and analytical skills.
• Should be able to express their design ideas and thoughts
We are a technology company operating in the media space. We are the pioneers of robot journalism in India. We use a mix of AI-generated and human-edited content across media formats, be it audio, video, or text.
Our key products include India’s first explanatory journalism portal (NewsBytes), a content platform for developers (DevBytes), and a SaaS platform for content creators (YANTRA).
Our B2C media products are consumed by more than 50 million users in a month, while our AI-driven B2B content engine helps companies create text-based content at scale.
The company was started by alumni of IIT, IIM Ahmedabad, and Cornell University. It has raised institutional financing from a well-renowned media-tech VC and a Germany-based media conglomerate.
We are hiring a talented DevOps Engineer with 3+ years of experience to join our team. If you're excited to be part of a winning team, we are a great place to grow your career.
Responsibilities
● Handle and optimise cloud (servers and CDN)
● Build monitoring tools for the infrastructure
● Perform a granular level of analysis and optimise usage
● Help migrate from a single cloud environment to a multi-cloud strategy
● Monitor threats and explore building a protection layer
● Develop scripts to automate certain aspects of the deployment process
Requirements and Skills
● 0-2 years of experience as a DevOps Engineer
● Proficient with AWS and GCP
● A certification from a relevant cloud provider
● Knowledge of PHP will be an advantage
● Working knowledge of databases and SQL
Job Title: Javascript Developers (Full-Stack Web)
On-site Location: NCTE, Dwarka, Delhi
Job Type: Full-Time
Company: Bharattech AI Pvt Ltd
Eligibility:
- 6 years of experience (minimum)
- B.E/B.Tech/M.E/M.Tech -or- MCA -or- M.Sc(IT or CS) -or- MS in Software Systems
About the Company:
Bharattech AI Pvt Ltd is a leader in providing innovative AI and data analytics solutions. We have partnered with the National Council for Teacher Education (NCTE), Delhi, to implement and develop their data analytics & MIS development lab, called VSK. We are looking for skilled Javascript Developers (Full-Stack Web) to join our team and contribute to this prestigious project.
Job Description:
Bharattech AI Pvt Ltd is seeking two Javascript Developers (Full-Stack Web) to join our team for an exciting project with NCTE, Delhi. As a Full-Stack Developer, you will play a crucial role in the development and integration of the VSK Web application and related systems.
Work Experience:
- Minimum 6 years' experience in Web apps, PWAs, Dashboards, or Website Development.
- Proven experience in the complete lifecycle of web application development.
- Demonstrated experience as a full-stack developer.
- Knowledge of either MERN, MEVN, or MEAN stack.
- Knowledge of popular frameworks (Express/Meteor/React/Vue/Angular etc.) for any of the stacks mentioned above.
Role and Responsibilities:
- Study the readily available client datasets and leverage them to run the VSK smoothly.
- Communicate with the Team Lead and Project Manager to capture software requirements.
- Develop high-level system design diagrams for program design, coding, testing, debugging, and documentation.
- Develop, update, and modify the VSK Web application/Web portal.
- Integrate existing software/applications with VSK using readily available APIs.
Skills and Competencies:
- Proficiency in full-stack development, including both front-end and back-end technologies.
- Strong knowledge of web application frameworks and development tools.
- Experience with API integration and software development best practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and the ability to work effectively in a team environment.
Why Join Us:
- Be a part of a cutting-edge project with a significant impact on the education sector.
- Work in a dynamic and collaborative environment with opportunities for professional growth.
- Competitive salary and benefits package.
Join Bharattech AI Pvt Ltd and contribute to transforming technological development at NCTE, Delhi!
Bharattech AI Pvt Ltd is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Responsibilities:
- You will develop tools and applications aligning to the best coding practices.
- You will perform technical analysis, design, development, and implementation of projects.
- You will write clear quality code for software and applications and perform test reviews.
- You will detect and troubleshoot software issues
- You will develop, implement, and test APIs
- You will adhere to industry best practices and contribute to internal coding standards
- You will manipulate images and videos based on project requirements.
Requirements:
- You have a strong passion for start-ups and the proactiveness to deliver
- You have hands-on experience building services using Node.js and Express.js technologies
- You have hands-on experience with MongoDB (NoSQL/SQL) database technologies
- You are good at web technologies like React JS/Next JS, JavaScript, TypeScript
- You are good at web technologies like RESTful/SOAP web services
- You are good at caching and third-party integration
- You have strong debugging and troubleshooting skills
- Experience with either AWS (Amazon Web Services) or GCP (Google Cloud Platform)
- Knowledge of Python, Chrome extension development, and DevOps is a plus
- You must be proficient in building scalable backend infrastructure software or distributed systems with exposure to Front-end and backend libraries/frameworks.
- Experience with Databases and microservices architecture is an advantage
- You should be able to push your limits and go beyond your role to scale the product
- Go-getter attitude; able to drive progress with very little guidance and short turnaround times
Role Description
This is a full-time client facing on-site role for a Data Scientist at UpSolve Solutions in Mumbai. The Data Scientist will be responsible for performing various day-to-day tasks, including data science, statistics, data analytics, data visualization, and data analysis. The role involves utilizing these skills to provide actionable insights to drive business decisions and solve complex problems.
Qualifications
- Data Science, Statistics, and Data Analytics skills
- Data Visualization and Data Analysis skills
- Strong problem-solving and critical thinking abilities
- Ability to work with large datasets and perform data preprocessing
- Proficiency in programming languages such as Python or R
- Experience with machine learning algorithms and predictive modeling
- Excellent communication and presentation skills
- Bachelor's or Master's degree in a relevant field (e.g., Computer Science, Statistics, Data Science)
- Experience in the field of video and text analytics is a plus
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices.
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications.
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar.
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.