50+ Google Cloud Platform (GCP) Jobs in India
About Job
We are seeking an experienced Data Engineer to join our data team. As a Senior Data Engineer, you will work on various data engineering tasks including designing and optimizing data pipelines, data modelling, and troubleshooting data issues. You will collaborate with other data team members, stakeholders, and data scientists to provide data-driven insights and solutions to the organization. 3+ years of experience is required.
Responsibilities:
Design and optimize data pipelines for various data sources
Design and implement efficient data storage and retrieval mechanisms
Develop data modelling solutions and data validation mechanisms
Troubleshoot data-related issues and recommend process improvements
Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
Coach and mentor junior data engineers in the team
Skills Required:
3+ years of experience in data engineering or related field
Strong experience in designing and optimizing data pipelines, and data modelling
Strong proficiency in programming languages such as Python
Experience with big data technologies like Hadoop, Spark, and Hive (see the PySpark sketch after this list)
Experience with cloud data services such as AWS, Azure, or GCP
Strong experience with database technologies like SQL, NoSQL, and data warehousing
Knowledge of distributed computing and storage systems
Understanding of DevOps, Power Automate, and Microsoft Fabric will be an added advantage
Strong analytical and problem-solving skills
Excellent communication and collaboration skills
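To make the pipeline work above concrete, here is a minimal, hypothetical PySpark sketch of the kind of batch job the role describes: reading raw events, validating them, and writing a partitioned table. The paths, schema, and column names are illustrative assumptions, not part of the posting.

```python
# Hypothetical PySpark batch pipeline: ingest raw events, validate, and
# write a partitioned table for efficient downstream retrieval.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Placeholder source path and columns; replace with real ones.
raw = spark.read.json("gs://example-bucket/raw/events/")

clean = (
    raw.filter(F.col("event_id").isNotNull())      # basic data validation
       .withColumn("event_date", F.to_date("ts"))  # derive a partition column
       .dropDuplicates(["event_id"])               # keep re-runs idempotent
)

clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://example-bucket/curated/events/"
)
```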
Qualifications
Bachelor's degree in Computer Science, Data Science, or a related field (Master's degree preferred)
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences & Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation, or national origin. We are dedicated to providing equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
What we are looking for: Backend Lead Engineer
As a Backend Lead Engineer, you will play a pivotal role in driving the technical vision and execution of our product development team. You will lead a team of talented engineers, mentor their growth, and ensure the delivery of high-quality, scalable, and maintainable software solutions.
Responsibilities:
• Technical Leadership:
o Provide technical leadership and guidance to a team of backend engineers.
o Mentor and develop engineers to enhance their skills and capabilities.
o Collaborate with product managers, designers, and other stakeholders to define product requirements and technical solutions.
• Development:
o Design, develop, and maintain robust and scalable backend applications.
o Optimize application performance and scalability for large-scale deployment.
o Write clean, efficient, and well-tested code that adheres to best practices.
o Stay up-to-date with the latest technologies and trends in web development.
• Project Management:
o Lead and manage software development projects from inception to deployment.
o Estimate project timelines, assign tasks, and track progress.
o Ensure timely delivery of high-quality software.
• Problem-Solving:
o Identify and troubleshoot technical issues.
o Develop innovative solutions to complex problems.
• Architecture Design:
o Design and implement scalable and maintainable software architectures.
o Ensure the security, performance, and reliability of our systems.
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 6+ years of experience in backend software development.
• Proven experience leading and mentoring engineering teams.
• Strong proficiency in backend technologies (Python, Node.js, Java).
• Experience with GCP and deployment at scale.
• Familiarity with database technologies (SQL, NoSQL).
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
Preferred Qualifications:
• Experience with agile development methodologies (Scrum, Kanban).
• Knowledge of DevOps practices and tools.
• Experience with microservices architecture.
• Contributions to open-source projects.
Responsibilities:
• Leverage your expertise in Python to design and implement distributed systems for web scraping, ensuring robust data extraction from diverse web sources.
• Develop and optimize scalable, self-healing scraping frameworks, integrated with AI tools for intelligent automation of the data collection process.
• Implement monitoring, logging, and alerting mechanisms to ensure high availability and performance of distributed web scraping systems.
• Work with large-scale NoSQL databases (e.g., MongoDB) to store and query scraped data efficiently.
• Collaborate with cross-functional teams to research and implement innovative AI-driven solutions for data extraction and automation.
• Ensure data integrity and security while interacting with various web sources.
Required Skills:
• Extensive experience with Python and web frameworks like Flask, FastAPI, or Django.
• Experience with AI tools and machine learning libraries to enhance and automate scraping processes.
• Solid understanding of building and maintaining distributed systems, with hands-on experience in parallel programming (multithreading, asynchronous, multiprocessing).
• Working knowledge of asynchronous queue systems like Redis, Celery, RabbitMQ, etc., to handle distributed scraping tasks.
• Proven experience with web mining, scraping tools (e.g., Scrapy, BeautifulSoup, Selenium), and handling dynamic content.
• Proficiency in working with NoSQL data storage systems like MongoDB, including querying and handling large datasets.
• Knowledge of various front-end technologies and how various websites are built (see the sketch below)
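To illustrate the stack this posting describes (Celery workers behind a Redis broker, BeautifulSoup parsing, MongoDB storage), here is a minimal, hypothetical sketch of a distributed scraping task. The URLs, hostnames, and collection names are assumptions for illustration only.

```python
# Minimal sketch of a distributed scraping task: Celery workers pull URLs
# from a Redis-backed queue, fetch pages, and upsert results into MongoDB.
import requests
from bs4 import BeautifulSoup
from celery import Celery
from pymongo import MongoClient

app = Celery("scraper", broker="redis://localhost:6379/0")
db = MongoClient("mongodb://localhost:27017")["scraping"]

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def scrape(self, url: str):
    try:
        resp = requests.get(url, timeout=15)
        resp.raise_for_status()
    except requests.RequestException as exc:
        raise self.retry(exc=exc)  # back off and retry transient failures
    soup = BeautifulSoup(resp.text, "html.parser")
    doc = {"url": url, "title": soup.title.string if soup.title else None}
    db.pages.update_one({"url": url}, {"$set": doc}, upsert=True)  # idempotent

# Fan out across workers: scrape.delay("https://example.com/page/1")
```

The retry-on-exception pattern is one simple way to approximate the "self-healing" behavior mentioned above; production frameworks layer rate limiting, proxy rotation, and deduplication on top.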
Key Responsibilities:
- Cloud Infrastructure Management: Oversee the deployment, scaling, and management of cloud infrastructure across platforms like AWS, GCP, and Azure. Ensure optimal configuration, security, and cost-effectiveness.
- Application Deployment and Maintenance: Responsible for deploying and maintaining web applications, particularly those built on Django and the MERN stack (MongoDB, Express.js, React, Node.js). This includes setting up CI/CD pipelines, monitoring performance, and troubleshooting.
- Automation and Optimization: Develop scripts and automation tools to streamline operations. Continuously seek ways to improve system efficiency and reduce downtime.
- Security Compliance: Ensure that all cloud deployments comply with relevant security standards and practices. Regularly conduct security audits and coordinate with security teams to address vulnerabilities.
- Collaboration and Support: Work closely with development teams to understand their needs and provide technical support. Act as a liaison between developers, IT staff, and management to ensure smooth operation and implementation of cloud solutions.
- Disaster Recovery and Backup: Implement and manage disaster recovery plans and backup strategies to ensure data integrity and availability.
- Performance Monitoring: Regularly monitor and report on the performance of cloud services and applications. Use data to make informed decisions about upgrades, scaling, and other changes.
Required Skills and Experience:
- Proven experience in managing cloud infrastructure on AWS, GCP, and Azure.
- Strong background in deploying and maintaining Django-based and MERN stack web applications.
- Expertise in automation tools and scripting languages.
- Solid understanding of network architecture and security protocols.
- Experience with continuous integration and deployment (CI/CD) methodologies.
- Excellent problem-solving abilities and a proactive approach to system optimization.
- Good communication skills for effective collaboration with various teams.
Desired Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in AWS, GCP, or Azure are highly desirable.
- Minimum 5 years of experience in a DevOps or similar role, with a focus on cloud computing and web application deployment.
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We work at scale; our infrastructure handles around 3 Billion+ API hits & 2 Billion+ message events monthly and over 25M views of customer pages daily. We also handle over 80 Terabytes of data across 5 Databases.
About the Team:
The Expansion Products team is responsible for driving volumetric and usage-based upgrades and upsells within the platform to maximize revenue potential (apart from subscription revenue). We do this by building innovative products and features that solve real-world problems for agencies and allow them to consolidate their offering to their clients in a single platform packaged under their white-labeled brand. The Expansion Products team focuses exclusively on products that can demonstrate adoption, drive up engagement in target segments, and are easily monetizable. This team handles multiple product areas including the phone system, email system, online listing integration, WordPress hosting, memberships and courses, and mobile apps.
About the Role:
We’re looking for a skilled Senior Software Engineer for our Membership Platform to help us take the platform’s infrastructure to the next level. In this role, you'll focus on keeping our databases fast and reliable, improving and managing the infrastructure, and reducing technical debt so we can scale smoothly as we grow. You’ll play a key part in ensuring our platform is stable, secure, and easy for our product teams to work with. This is an exciting opportunity to work on large-scale systems and make a direct impact on the experience of millions of users.
Responsibilities:
- Optimize and manage scalable databases to ensure high performance and reliability.
- Automate and maintain infrastructure using IaC tools, CI/CD pipelines, and best security practices.
- Identify, prioritize, and address technical debt to improve performance and maintainability.
- Implement monitoring and observability solutions to support high availability and incident response.
- Collaborate with cross-functional teams and document processes, mentoring engineers and sharing knowledge.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 4+ years in platform engineering, with expertise in large-scale databases and infrastructure.
- Experience in full-stack engineering with Node.js and modern JavaScript frameworks like Vue.js (preferred), React.js, or Angular.
- Strong background in cloud platforms (AWS, GCP, or Azure).
- Proficient in building scalable applications and comfortable navigating the end-to-end flow of the software.
- Experience with relational and non-relational databases (e.g., MySQL, MongoDB, Firestore).
- Experience with monitoring tools (e.g., Prometheus, Grafana) and containerization (Docker; Kubernetes a plus); video streaming knowledge is a plus (see the sketch below).
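As a concrete illustration of the monitoring bullet above, here is a minimal sketch using the prometheus_client library to expose request metrics for Prometheus to scrape (and Grafana to chart). The metric names, labels, and port are illustrative assumptions.

```python
# Minimal observability sketch: expose request metrics on an HTTP endpoint
# that a Prometheus server can scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["route"])
LATENCY = Histogram("app_request_seconds", "Request latency", ["route"])

def handle(route: str):
    REQUESTS.labels(route=route).inc()
    with LATENCY.labels(route=route).time():   # record duration in buckets
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)  # metrics at http://localhost:9100/metrics
    while True:
        handle("/api/items")
```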
Client Location: Bangalore
Job Title: Solution Architect (ML, Cloud)
Work Location: Tokyo, Japan (Onsite)
Experience: 5-10 years
Job Description: We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Key Responsibilities:
Collaborate with stakeholders to understand business needs and develop scalable, efficient technical solutions.
Architect and implement complex systems integrating Machine Learning, Cloud platforms (AWS, Azure, Google Cloud), and Full Stack Development.
Lead the development and deployment of cloud-native applications using NoSQL databases, Python, and Kubernetes.
Design and optimize algorithms to improve performance, scalability, and reliability of solutions.
Review, validate, and refine architecture to ensure flexibility, scalability, and cost-efficiency.
Mentor development teams and ensure adherence to best practices for coding, testing, and deployment.
Contribute to the development of technical documentation and solution roadmaps.
Stay up-to-date with emerging technologies and continuously improve solution design processes.
Required Skills & Qualifications:
5-10 years of experience as a Solution Architect or similar role with expertise in ML, Cloud, and Full Stack Development.
Proficiency in at least two major cloud platforms (AWS, Azure, Google Cloud).
Solid experience with Kubernetes for container orchestration and deployment.
Hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
Expertise in Python and ML frameworks like TensorFlow, PyTorch, etc.
Practical experience implementing at least two real-world algorithms (e.g., classification, clustering, recommendation systems).
Strong knowledge of scalable architecture design and cloud-native application development.
Familiarity with CI/CD tools and DevOps practices.
Excellent problem-solving abilities and the ability to thrive in a fast-paced environment.
Strong communication and collaboration skills with cross-functional teams.
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Qualifications:
Experience with microservices and containerization.
Knowledge of distributed systems and high-performance computing.
Cloud certifications (AWS Certified Solutions Architect, Google Cloud Professional Architect, etc.).
Familiarity with Agile methodologies and Scrum.
Japanese language proficiency is an added advantage (but not mandatory).
Skills: ML, Cloud (any two major clouds), algorithms (at least two implemented in real-world applications; see the sketch below), full stack, Kubernetes, NoSQL, Python
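Since the posting asks for hands-on implementation of at least two algorithm families, here is a toy scikit-learn sketch covering two of the examples it names, classification and clustering. The data is synthetic and the models are baselines, purely for illustration.

```python
# Toy scikit-learn sketch: one classifier, one clusterer, synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classification: fit a logistic-regression baseline and evaluate it.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Clustering: group the same points without using the labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [(labels == k).sum() for k in (0, 1)])
```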
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes
Client based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Japanese language proficiency is an additional advantage, but not mandatory.
About the Company-
AdPushup is an award-winning ad revenue optimization platform and Google Certified Publishing Partner (GCPP), helping hundreds of web publishers grow their revenue using cutting-edge technology, premium demand partnerships, and proven ad ops expertise.
Our team is a mix of engineers, marketers, product evangelists, and customer success specialists, united by a common goal of helping publishers succeed. We have a work culture that values expertise, ownership, and a collaborative spirit.
Job Overview- Java Backend- Lead Role :-
We are seeking a highly skilled and motivated Software Engineering Team Lead to join our dynamic team. The ideal candidate will have a strong technical background, proven leadership experience, and a passion for mentoring and developing a team of talented engineers. This role will be pivotal in driving the successful delivery of high-quality software solutions and fostering a collaborative and innovative work environment.
Exp- 5+ years
Location- New Delhi
Work Mode- Hybrid
Key Responsibilities:-
● Leadership and Mentorship: Lead, mentor, and develop a team of software engineers, fostering an environment of continuous improvement and professional growth.
● Project Management: Oversee the planning, execution, and delivery of software projects, ensuring they meet quality standards, timelines, and budget constraints.
● Technical Expertise: Provide technical guidance and expertise in software design, architecture, development, and best practices. Stay updated with the latest industry trends and technologies. Design, develop, and maintain high-quality applications, taking full, end-to-end ownership, including writing test cases, setting up monitoring, etc.
● Collaboration: Work closely with cross-functional teams to define project requirements, scope, and deliverables.
● Code Review and Quality Assurance: Conduct code reviews to ensure adherence to coding standards, best practices, and overall software quality. Implement and enforce quality assurance processes.
● Problem Solving: Identify, troubleshoot, and resolve technical challenges and bottlenecks. Provide innovative solutions to complex problems.
● Performance Management: Set clear performance expectations, provide regular feedback, and conduct performance evaluations for team members.
● Documentation: Ensure comprehensive documentation of code, processes, and project-related information.
Qualifications:-
● Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
● Experience: Minimum of 5 years of experience in software development.
● Technical Skills:
○ A strong body of prior backend work, successfully delivered in production. Experience building large volume data processing pipelines will be an added bonus.
○ Expertise in Core Java.
■ In-depth knowledge of the Java concurrency framework.
■ Sound knowledge of concepts like exception handling, garbage collection, and generics.
■ Experience in writing unit test cases, using any framework.
■ Hands-on experience with lambdas and streams.
■ Experience in using build tools like Maven and Ant.
○ A good understanding of and hands-on experience with Java frameworks (e.g., Spring Boot, Vert.x) will be an added advantage.
○ Good understanding of security best practices.
○ Hands-on experience with low-level and high-level design practices and patterns.
○ Hands-on experience with any of the cloud platforms, such as AWS, Azure, or Google Cloud.
○ Familiarity with containerization and orchestration tools like Docker and Kubernetes, and infrastructure tooling like Terraform.
○ Strong understanding of database technologies, both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Couchbase).
○ Knowledge of DevOps practices and tools such as Jenkins, CI/CD.
○ Strong understanding of software development methodologies (e.g., Agile, Scrum).
● Leadership Skills: Proven ability to lead, mentor, and inspire a team of engineers. Excellent interpersonal and communication skills.
● Problem-Solving Skills: Strong analytical and problem-solving abilities. Ability to think critically and provide innovative solutions.
● Project Management: Experience in managing software projects from conception to delivery. Strong organizational and time-management skills.
● Collaboration: Ability to work effectively in a cross-functional team environment. Strong collaboration and stakeholder management skills.
● Adaptability: Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements.
Why Should You Work for AdPushup?
At AdPushup, we have
1. A culture of valuing our employees and promoting an autonomous, transparent, and ethical work environment.
2. Talented and supportive peers who value your contributions.
3. Challenging opportunities: learning happens outside the comfort zone, and that's where our team likes to be: always pushing the boundaries and growing personally and professionally.
4. Flexibility to work from home: We believe in work & performance instead of measuring conventional benchmarks like work-hours.
5. Plenty of snacks and catered lunch.
6. Transparency: an open, honest and direct communication with co-workers and business associates.
Google Workspace Apps Developer
About the Role
Kinematic Digital is seeking an experienced Software Developer specializing in Google Workspace application development. The ideal candidate will create, maintain, and enhance custom applications and integrations within the Google Workspace ecosystem, including Google Docs, Sheets, Drive, Gmail, and Calendar.
Key Responsibilities
- Design and develop custom Google Workspace applications using Google Apps Script and Google Cloud Platform
- Create automation solutions and workflow improvements using Google Workspace APIs
- Build integrations between Google Workspace and other enterprise systems
- Implement security best practices and ensure compliance with Google's security guidelines
- Maintain and update existing Google Workspace applications and scripts
- Debug and optimize application performance
- Provide technical documentation and support training materials
Required Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or related field
- 3+ years of experience in software development
- Strong proficiency in JavaScript and Google Apps Script
- Experience with Google Workspace APIs and SDK
- Knowledge of HTML5, CSS3, and modern web development practices
- Understanding of RESTful APIs and web services
- Experience with version control systems (Git)
- Strong problem-solving and analytical skills
Preferred Qualifications
- Google Cloud Platform certification
- Experience with Google Workspace Add-ons development
- Knowledge of OAuth 2.0 and security protocols
- Familiarity with Google Apps Script Advanced Services
- Experience with Node.js and modern JavaScript frameworks
- Background in enterprise software development
- Experience with Workspace administrative tasks and configurations
Technical Skills
- Languages: JavaScript, HTML5, CSS3
- Platforms: Google Apps Script, Google Cloud Platform
- APIs: Google Workspace APIs (Docs, Sheets, Drive, Gmail, Calendar)
- Tools: Google Cloud Console, Apps Script IDE, Git
- Security: OAuth 2.0, Google Cloud IAM
Required Experience with Google Workspace Development
- Creating custom functions and macros for Google Sheets (see the sketch after this list)
- Building automation workflows across Workspace applications
- Developing custom sidebars and dialogue interfaces
- Managing document permissions and sharing programmatically
- Implementing time-triggered and event-driven scripts
- Creating custom menus and user interfaces
- Working with Google Workspace Add-ons
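The role centers on Apps Script (JavaScript), but the same Workspace REST APIs are also callable from Python; as a hedged illustration, here is a minimal sketch that reads a Sheets range with google-api-python-client. The credentials file, spreadsheet ID, and range are placeholders.

```python
# Hedged sketch: read a Sheets range via the Google Sheets REST API using
# google-api-python-client. IDs and paths are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)

sheets = build("sheets", "v4", credentials=creds)
result = (
    sheets.spreadsheets()
    .values()
    .get(spreadsheetId="YOUR_SPREADSHEET_ID", range="Sheet1!A1:C10")
    .execute()
)
for row in result.get("values", []):
    print(row)
```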
Project Examples
The successful candidate will work on projects such as:
- Automated document generation and management systems
- Custom reporting and analytics dashboards
- Workflow automation between different Google Workspace applications
- Integration with third-party systems and databases
- Custom forms and data collection solutions
- Document approval and review systems
- Team collaboration tools and templates
Location
- Pune, Mumbai or Remote
CLOUDSUFI is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Job type - Fulltime / Contract
Summary
We are looking for a Senior MLOps Engineer to support the AI CoE in building and scaling machine learning operations. This position requires both strategic oversight and direct involvement in MLOps infrastructure design, automation, and optimization. The person will lead a team while collaborating with various stakeholders to manage machine learning pipelines and model deployments in GCP / AWS / Azure. One of the key parts of this role is managing data and models using data cataloging tools, ensuring that they are well-documented, versioned, and accessible for reuse and auditing.
Job Description:
⮚ Deploy models to production in GCP and own the model maintenance, monitoring and support activities
⮚ Split time between high-level strategy and hands-on technical implementation
⮚ Architect, build, and maintain scalable MLOps pipelines, with a focus on GCP / AWS / Azure services such as Vertex AI, GKE, Cloud Storage, and BigQuery; stay up-to-date with the latest trends and advancements in MLOps
⮚ Implement and optimize CI/CD pipelines for machine learning model deployment, ensuring minimal downtime and streamlined processes
⮚ Work closely with data scientists and data engineers to ensure efficient data processing pipelines, model training, testing, and deployment
⮚ Manage data catalog tools for model and dataset versioning, lineage tracking, and governance. Ensure that all models and datasets are properly documented and discoverable
⮚ Develop automated systems for model monitoring, logging, and performance tracking in production environments
⮚ Lead the integration of data cataloging tools (e.g., OpenMetadata), ensuring the traceability and versioning of both datasets and models.
Required Experience:
⮚ Bachelor’s degree in Computer Science, Engineering or similar quantitative disciplines
⮚ 4+ years of professional experience in MLOps or similar roles
⮚ Candidates should be able to write ML code
⮚ Excellent analytical and problem-solving skills for technical challenges related to MLOps
⮚ Excellent English proficiency, presentation, and communication skills
⮚ Proven experience in deploying, monitoring, and managing machine learning models on GCP / AWS / Azure
⮚ Hands-on experience with data catalog tools
⮚ Expert in GCP / AWS / Azure services such as Vertex AI, GKE, BigQuery, Cloud Build, Endpoints, etc. for building scalable ML infrastructure (GCP / AWS / Azure official certifications are a huge plus); see the sketch below
⮚ Experience with model serving frameworks (e.g., TensorFlow Serving, TorchServe) and MLOps tools like Kubeflow, MLflow, or TFX
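As one hedged illustration of the Vertex AI deployment work described above, here is a minimal sketch using the google-cloud-aiplatform client to upload a model artifact and deploy it to an endpoint. The project, region, bucket, and serving container image are placeholder assumptions, not this team's actual configuration.

```python
# Hedged sketch of one MLOps step: upload a model artifact to Vertex AI
# and deploy it to an endpoint. All names below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://my-bucket/models/demo/",  # exported model artifacts
    serving_container_image_uri=(
        # Placeholder prebuilt serving image; pick one matching your framework.
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

endpoint = model.deploy(machine_type="n1-standard-2")  # creates an endpoint
print(endpoint.resource_name)
```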
Senior Backend Developer
Job Overview: We are looking for a highly skilled and experienced Backend Developer who excels in building robust, scalable backend systems using multiple frameworks and languages. The ideal candidate will have 4+ years of experience working with at least two backend frameworks and be proficient in at least two programming languages such as Python, Node.js, or Go. As a Sr. Backend Developer, you will play a critical role in designing, developing, and maintaining backend services, ensuring seamless real-time communication with WebSockets, and optimizing system performance with tools like Redis, Celery, and Docker.
Key Responsibilities:
- Design, develop, and maintain backend systems using multiple frameworks and languages (Python, Node.js, Go).
- Build and integrate APIs, microservices, and other backend components.
- Implement real-time features using WebSockets and ensure efficient server-client communication (see the sketch after this list).
- Collaborate with cross-functional teams to define, design, and ship new features.
- Optimize backend systems for performance, scalability, and reliability.
- Troubleshoot and debug complex issues, providing efficient and scalable solutions.
- Work with caching systems like Redis to enhance performance and manage data.
- Utilize task queues and background job processing tools like Celery.
- Develop and deploy applications using containerization tools like Docker.
- Participate in code reviews and provide constructive feedback to ensure code quality.
- Mentor junior developers, sharing best practices and promoting a culture of continuous learning.
- Stay updated with the latest backend development trends and technologies to keep our solutions cutting-edge.
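As a concrete illustration of the real-time bullet in the list above, here is a minimal, hypothetical sketch of a FastAPI WebSocket endpoint backed by a Redis cache. The hostname, port, and key scheme are assumptions for illustration.

```python
# Minimal real-time sketch: a FastAPI WebSocket endpoint that serves
# values from a Redis cache. Connection details are illustrative.
import redis.asyncio as redis
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

@app.websocket("/ws")
async def ws_endpoint(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            key = await ws.receive_text()
            value = await cache.get(key)          # serve hot data from Redis
            await ws.send_text(value or "miss")
    except WebSocketDisconnect:
        pass  # client closed the connection

# Run with: uvicorn app:app --reload
```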
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 4+ years of professional experience as a Backend Developer.
- Proficiency in at least two programming languages: Python, Node.js, or Go.
- Experience working with multiple backend frameworks (e.g., Express, Flask, Gin, Fiber, FastAPI).
- Strong understanding of WebSockets and real-time communication.
- Hands-on experience with Redis for caching and data management.
- Familiarity with task queues like Celery for background job processing.
- Experience with Docker for containerizing applications and services.
- Strong knowledge of RESTful API design and implementation.
- Understanding of microservices architecture and distributed systems.
- Solid understanding of database technologies (SQL and NoSQL).
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, both written and verbal.
Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with GraphQL and other modern API paradigms.
- Familiarity with task queues, caching, or message brokers (e.g., Celery, Redis, RabbitMQ).
- Understanding of security best practices in backend development.
- Knowledge of automated testing frameworks for backend services.
- Familiarity with version control systems, particularly Git.
Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary, benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.
About The Role:
The products/services of Eclat Engineering Pvt. Ltd. are being used by some of the leading institutions in India and abroad, and demand for our services/products is growing rapidly. We are looking for a capable and dynamic Senior DevOps Engineer to help set up, maintain, and scale our infrastructure operations. This individual will have the challenging responsibility of managing our IT infrastructure and delivering customer services at stringent international standards of service quality, leveraging the latest IT tools to automate and streamline the delivery of our services while implementing industry-standard processes and knowledge management.
Roles & Responsibilities:
- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and scalable.
- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.
- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource usage. Implement cost-saving measures while maintaining scalability and reliability.
- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure consistency across environments. Automate configuration changes and updates.
- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.
- Collaboration with Development and Operations Teams: Foster collaboration between development and operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure issues and improving the development process.
- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure business continuity in the event of system failures or other disruptions.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations, processes, and best practices. Share knowledge and mentor junior team members.
- Technical Leadership and Innovation: Stay up-to-date with industry trends and emerging technologies. Lead efforts to introduce new tools and technologies that enhance DevOps practices.
- Problem Resolution and Troubleshooting: Be responsible for diagnosing and resolving complex issues related to infrastructure and deployments. Implement preventive measures to reduce recurring problems.
Requirements:
● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc.IT (if not, the candidate should be able to demonstrate the required skills)
● Overall 3+ years of experience in DevOps and cloud operations, specifically on AWS.
● Experience with Linux administration
● Experience with microservice architecture, containers, Kubernetes, and Helm is a must
● Experience in Configuration Management preferably Ansible
● Experience in Shell Scripting is a must
● Experience in developing and maintaining CI/CD processes using tools like Gitlab, Jenkins
● Experience in logging, monitoring and analytics
● An understanding of writing Infrastructure as Code using tools like Terraform
● Preferences - AWS, Kubernetes, Ansible
Must Have:
● Knowledge of AWS Cloud Platform.
● Good experience with microservice architecture, Kubernetes, helm and container-based technologies
● Hands-on experience with Ansible.
● Should have experience in working and maintaining CI/CD Processes.
● Hands-on experience with version control tools like Git.
● Experience with monitoring tools such as CloudWatch, Sysdig, etc. (see the sketch after this list)
● Sound experience in administering Linux servers and Shell Scripting.
● Should have a good understanding of IT security and have the knowledge to secure production environments (OS and server software).
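As a small illustration of the CloudWatch monitoring experience listed above, here is a hedged boto3 sketch that publishes a custom metric. The namespace, metric, and dimension names are invented for the example.

```python
# Hedged sketch: publish a custom CloudWatch metric with boto3, the kind
# of monitoring glue the role describes. Names are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_data(
    Namespace="Example/Services",
    MetricData=[
        {
            "MetricName": "QueueDepth",
            "Dimensions": [{"Name": "Service", "Value": "ingest"}],
            "Value": 42.0,
            "Unit": "Count",
        }
    ],
)
```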
At Appscrip
Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.
- Ensure optimal performance and low latency for API endpoints, focusing on real-time data processing (see the sketch below).
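A minimal sketch of what exposing AI models via RESTful APIs can look like with FastAPI, as described above; the generate function here is a stand-in assumption for a real model call, not this team's implementation.

```python
# Minimal FastAPI sketch: expose a (placeholder) model behind a REST endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str
    max_tokens: int = 128

def generate(text: str, max_tokens: int) -> str:
    # Placeholder for an actual LLM call (e.g., a Hugging Face pipeline).
    return text[:max_tokens]

@app.post("/generate")
async def generate_endpoint(prompt: Prompt):
    return {"completion": generate(prompt.text, prompt.max_tokens)}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```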
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
Job Description
The Opportunity
The Springboard engineering team is looking for software engineers with strong backend & frontend technical expertise. In this role, you would be responsible for building exciting features aimed at improving our student experience and expanding our student base, using the latest technologies like GenAI, as relevant. You would also contribute to making our platform more robust, flexible and scalable. This is a great opportunity to create a meaningful impact as well as grow in your career.
We are looking for engineers with different levels of experience and expertise. Depending on your proficiency levels, you will join our team as a Software Engineer II, Senior Software Engineer or Lead Software Engineer.
Responsibilities
- Design and develop features for the Springboard platform, which enriches the learning experience of thousands through human guided learning at scale
- Own quality and reliability of the product by getting hands on with code and design reviews, debugging complex issues and so on
- Contribute to the platform architecture through redesign of complex features based on evolving business needs
- Influence and establish best engineering practices through solid design decisions, processes and tools
- Provide technical mentoring to team members
You
- You have experience with web application development, on both backend and frontend.
- You have a solid understanding of software design principles and best practices.
- You have hands-on experience in:
- Coding and debugging complex systems, with frontend integration.
- Code review, responsible for production deployments.
- Building scalable and fault-tolerant applications.
- Re-architecting / re-designing complex systems / features (i.e. managing technical debt).
- Defining and following best practices for frontend and backend systems.
- You have excellent problem solving skills and are comfortable handling ambiguity.
- You are able to analyze various alternatives and reach optimal decisions.
- You are willing to challenge the status quo, express your opinion and drive change.
- You are able to plan reasonably complex pieces of work and can handle changing priorities, unknowns and challenges with support. You want to contribute to the platform roadmap, aligning with the organization priorities and goals.
- You enjoy mentoring others and helping them solve challenging problems.
- You have excellent written and verbal communication skills with the ability to present complex technical information in a clear and concise manner. You are able to communicate with various stakeholders to understand their requirements.
- You are a proponent of quality - building best practices, introducing new processes and improvements to make the team more efficient.
Non-negotiables
Must have
- Expertise in Backend development (Python & Django experience preferred)
- Expertise in Frontend development (AngularJS / ReactJS / VueJS experience preferred)
- Experience working with SQL databases
- Experience building multiple significant features for web applications
Good to have
- Experience with Google Cloud Platform (or any cloud platform)
- Experience working with any Learning Management System (LMS), such as Canvas
- Experience working with GenAI ecosystem, including usage of AI tools such as code completion
- Experience with CI/CD pipelines and applications deployed on Kubernetes
- Experience with refactoring (redesigning complex systems / features, breaking monolith into services)
- Experience working with NoSQL databases
- Experience with Web performance optimization, SEO, Gatsby and FE Analytics
- Delivery skills, specifically planning open ended projects
- Mentoring skills
Expectations
- Able to work with open ended problems and come up with efficient solutions
- Able to communicate effectively with business stakeholders to clarify requirements for small to medium tasks and own end to end delivery
- Able to communicate estimations, plan deviations and blockers in an efficient and timely manner to all project stakeholders
About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.
Key Responsibilities:
- Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
- Design and build efficient, secure, and modular client-side and server-side architecture
- Develop high-performance web applications with reusable and maintainable code
- Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
- Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
- Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance (see the sketch below)
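To illustrate the Cloud Functions bullet above, here is a minimal, hypothetical HTTP-triggered function written with the functions-framework for Python. The function name and payload are invented for the example.

```python
# Hedged sketch: an HTTP-triggered Google Cloud Function using the
# functions-framework. The handler name and payload are illustrative.
import functions_framework

@functions_framework.http
def transcode_status(request):
    """Return a JSON status for a hypothetical audio-processing job."""
    job_id = request.args.get("job_id", "unknown")
    return {"job_id": job_id, "status": "processing"}, 200

# Local test: functions-framework --target transcode_status --port 8080
```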
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: 6+ years of proven experience as a Full Stack Developer or in a similar role, with demonstrable expertise in building web applications at scale
- Technical Skills:
- Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
- Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
- Familiarity with NoSQL and PostgreSQL databases
- Experience working with audio/video processing libraries is a strong plus
- Soft Skills:
- Strong problem-solving skills and the ability to think critically about issues and solutions
- Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
- Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
- Keen attention to detail and a passion for delivering high-quality, scalable solutions
- Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment
Compensation and Benefits:
- Total Yearly Compensation: ₹25 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
- ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K+ businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
Work at scale: our infrastructure handles 30 billion+ API hits, 20 billion+ message events, and more than 200 terabytes of data
About the role:
Seeking a seasoned Full Stack Developer with hands-on experience in Node.js and Vue.js (or React/Angular). You will be instrumental in building cutting-edge, AI-powered products along with mentoring or leading a team of engineers.
Team-Specific Focus Areas:
Conversations AI:
-Develop AI solutions for appointment booking, form filling, sales, and intent recognition
-Ensure seamless integration and interaction with users through natural language processing and understanding
Workflows AI:
-Create and optimize AI-powered workflows to automate and streamline business processes
Voice AI:
-Focus on VOIP technology with an emphasis on low latency and high-quality voice interactions
-Fine-tune voice models for clarity, accuracy, and naturalness in various applications
Support AI:
-Integrate AI solutions with FreshDesk and ClickUp to enhance customer support and ticketing systems
-Develop tools for automated response generation, issue resolution, and workflow management
Platform AI:
-Oversee AI training, billing, content generation, funnels, image processing, and model evaluations
-Ensure scalable and efficient AI models that meet diverse platform needs and user demands
Responsibilities:
- REST APIs - Understanding REST philosophy. Writing secure, reusable, testable, and efficient APIs.
- Database - Designing collection schemas and writing efficient queries (see the sketch after this list)
- Frontend - Developing user-facing features and integration with REST APIs
- UI/UX - Being consistent with the design principles and reusing components wherever possible
- Communication - With other team members, product team, and support team
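Since the requirements below name MongoDB, here is a minimal sketch, in Python with pymongo, of what the schema-design and efficient-query bullets can look like in practice. The collection, field names, and connection URI are illustrative assumptions, not HighLevel's actual data model.

```python
# Illustrative only: a compound index plus a selective query for a
# hypothetical "conversations" collection; all names are assumptions.
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["crm"]

# Index chosen to match the query's equality field first, then its sort field.
db.conversations.create_index([("account_id", ASCENDING),
                               ("updated_at", DESCENDING)])

# Efficient "latest 20 conversations for an account" query: it can be
# satisfied by the index above instead of a collection scan and in-memory sort.
latest = (db.conversations
          .find({"account_id": "acct_123"}, {"_id": 0, "subject": 1, "updated_at": 1})
          .sort("updated_at", DESCENDING)
          .limit(20))
```

The design choice here is to put the equality field before the sort field in the index, so the query above is served directly from the index.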
Requirements:
- Expertise with large-scale conversation agents, along with response evaluations
- Good hands-on experience with Node.Js and Vue.js (or React/Angular)
- Experience working with production-grade applications with decent usage
- Bachelor's degree or equivalent experience in Engineering or related field of study
- 5+ years of engineering experience
- Expertise with MongoDB
- Proficient understanding of code versioning tools, such as Git
- Strong communication and problem-solving skills
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
Key Responsibilities:
- Azure Cloud Sales & Solutioning: Lead Microsoft Azure cloud sales efforts across global regions, delivering solutions for applications, databases, and SAP servers based on customer requirements.
- Customer Engagement: Act as a trusted advisor for customers, leading them through their cloud transformation by understanding their requirements and recommending suitable cloud solutions.
- Lead Generation & Cost Optimization: Generate leads independently, provide cost-optimized Azure solutions, and continuously work to maximize value for clients.
- Sales Certifications: Hold basic Microsoft sales certifications (Foundation & Business Professional).
- Project Management: Oversee and manage Azure cloud projects, including setting up timelines, guiding technical teams, and communicating progress to customers. Ensure the successful completion of project objectives.
- Cloud Infrastructure Expertise: Maintain a deep understanding of Azure cloud infrastructure and services, including migrations, disaster recovery (DR), and cloud budgeting.
- Billing Management: Manage Azure billing processes, including subscription-based invoicing, purchase orders, renewals, license billing, and tracking expiration dates.
- Microsoft License Sales: Expertise in selling Microsoft licenses such as SQL, Windows, and Office 365.
- Client Collaboration: Schedule meetings with internal teams and clients to align on project requirements and ensure effective communication.
- Customer Management: Track leads, follow up on calls, and ensure customer satisfaction by resolving issues and optimizing cloud resources. Provide regular updates on Microsoft technologies and programs.
- Field Sales: Participate in presales meetings and client visits to gather insights and propose cloud solutions.
- Internal Collaboration: Work closely with various internal departments to achieve project results and meet client expectations.
Qualifications:
- 1-3+ years of experience selling or consulting with corporate/public sector/enterprise customers on Microsoft Azure cloud.
- Proficient in Azure cost optimization, cloud infrastructure, and sales of cloud solutions to end customers.
- Experience in generating leads and tracking sales progress.
- Project management experience with strong organizational skills.
- Ability to work collaboratively with internal teams and customers.
- Strong communication and problem-solving skills.
- SHIFT: DAY SHIFT
- WORKING DAYS: MON-SAT
- LOCATION: HYDERABAD
- WORK MODEL: WORK FROM THE OFFICE
REQUIRED QUALIFICATIONS:
- A degree in Computer Science or equivalent (graduation level)
BENEFITS FROM THE COMPANY:
- Strong prospects for career growth.
- Flexible working hours and the best infrastructure.
- Passionate team members all around you.
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
About Lean Technologies
Lean is on a mission to revolutionize the fintech industry by providing developers with a universal API to access their customers' financial accounts across the Middle East. We’re breaking down infrastructure barriers and empowering the growth of the fintech industry. With Sequoia leading our $33 million Series A round, Lean is poised to expand its coverage across the region while continuing to deliver unparalleled value to developers and stakeholders.
Join us and be part of a journey to enable the next generation of financial innovation. We offer competitive salaries, private healthcare, flexible office hours, and meaningful equity stakes to ensure long-term alignment. At Lean, you'll work on solving complex problems, build a lasting legacy, and be part of a diverse, inclusive, and equal opportunity workplace.
About the role:
Are you a highly motivated and experienced software engineer looking to take your career to the next level? Our team at Lean is seeking a talented engineer to help us build the distributed systems that allow our engineering teams to deploy our platform in multiple geographies across various deployment solutions. You will work closely with functional heads across software, QA, and product teams to deliver scalable and customizable release pipelines.
Responsibilities
- Distributed systems architecture – understand and manage the most complex systems
- Continual reliability and performance optimization – enhancing observability stack to improve proactive detection and resolution of issues
- Employing cutting-edge methods and technologies, continually refining existing tools to enhance performance and drive advancements
- Problem-solving capabilities – troubleshooting complex issues and proactively reducing toil through automation (see the sketch after this list)
- Experience in technical leadership and setting technical direction for engineering projects
- Collaboration skills – working across teams to drive change and provide guidance
- Technical expertise – deep skills and the ability to act as a subject matter expert in one or more of: IaC, observability, coding, reliability, debugging, system design
- Capacity planning – effectively forecasting demand and reacting to changes
- Analyze and improve efficiency, scalability, and stability of various system resources
- Incident response – rapidly detecting and resolving critical incidents. Minimizing customer impact through effective collaboration, escalation (including periodic on-call shifts) and postmortems
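As one hedged illustration of the toil-reduction and proactive-detection bullets above, the sketch below polls service health endpoints and flags slow or failing ones. The endpoints and latency budget are placeholders, not Lean's actual monitoring setup, which the posting says centers on an observability stack.

```python
# Minimal toil-reducing probe: check health endpoints and flag slow or
# failing ones. Endpoints and thresholds are placeholders.
import time
import urllib.request

ENDPOINTS = ["https://example.com/healthz"]  # placeholder targets
LATENCY_BUDGET_S = 0.5


def probe(url: str) -> tuple[bool, float]:
    """Return (healthy, elapsed_seconds) for a single endpoint."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:  # covers URLError, HTTPError, timeouts
        ok = False
    return ok, time.monotonic() - start


for url in ENDPOINTS:
    ok, elapsed = probe(url)
    if not ok or elapsed > LATENCY_BUDGET_S:
        print(f"ALERT {url}: ok={ok} latency={elapsed:.3f}s")  # page/notify here
```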
Requirements
- 10+ years of experience in Systems Engineering, DevOps, or SRE roles running large-scale infrastructure, cloud, or web services
- Strong background in Linux/Unix Administration and networking concepts
- We work on OCI but would accept candidates with solid GCP/AWS or other cloud providers’ knowledge and experience
- 3+ years of experience with managing Kubernetes clusters, Helm, Docker
- Experience in operating CI/CD pipelines that build and deliver services on the cloud and on-premise
- Work with CI/CD tools/services like Jenkins/GitHub-Actions/ArgoCD etc.
- Experience with configuration management tools either Ansible, Chef, Puppet, or equivalent
- Infrastructure as Code - Terraform
- Experience in production environments with both relational and NoSQL databases
- Coding with one or more of the following: Java, Python, and/or Go
Bonus
- MultiCloud or Hybrid Cloud experience
- OCI and GCP
Why Join Us?
At Lean, we value talent, drive, and entrepreneurial spirit. We are constantly on the lookout for individuals who identify with our mission and values, even if they don’t meet every requirement. If you're passionate about solving hard problems and building a legacy, Lean is the right place for you. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, gender, sexual orientation, or disability.
We are seeking a Cloud Architect for a Geocode Service Center Modernization Assessment and Implementation project. The primary objective of the project is to migrate the legacy Geocode Service Center to a cloud-based solution. Initial efforts will focus on leading the assessment and design phases and, ultimately, on implementing the approved design.
Responsibilities:
- System Design and Architecture: Design and develop scalable, cloud-based geocoding systems that meet business requirements.
- Integration: Integrate geocoding services with existing cloud infrastructure and applications.
- Performance Optimization: Optimize system performance, ensuring high availability, reliability, and efficiency.
- Security: Implement robust security measures to protect geospatial data and ensure compliance with industry standards.
- Collaboration: Work closely with data scientists, developers, and other stakeholders to understand requirements and deliver solutions.
- Innovation: Stay updated with the latest trends and technologies in cloud computing and geospatial analysis to drive innovation.
- Documentation: Create and maintain comprehensive documentation for system architecture, processes, and configurations.
Requirements:
- Educational Background: Bachelor’s or Master’s degree in Computer Science, Information Technology, Geography, or a related field.
- Technical Proficiency: Extensive experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and geocoding tools like Precisely, ESRI etc.
- Programming Skills: Proficiency in programming languages such as Python, Java, or C#.
- Analytical Skills: Strong analytical and problem-solving skills to design efficient geocoding systems.
- Experience: Proven experience in designing and implementing cloud-based solutions, preferably with a focus on geospatial data.
- Communication Skills: Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Certifications: Relevant certifications in cloud computing (e.g., AWS Certified Solutions Architect) and geospatial technologies are a plus.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/il0hc?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
Wissen Technology is hiring for Devops engineer
Required:
- 4 to 10 years of relevant experience in DevOps
- Must have hands-on experience with AWS, Kubernetes, and CI/CD pipelines
- Good to have exposure to GitHub or GitLab
- Open to working from Chennai
- Work mode will be hybrid
Company profile:
Company Name : Wissen Technology
Group of companies in India : Wissen Technology & Wissen Infotech
Work Location - Chennai
Website : www.wissen.com
Wissen Thought leadership : https://lnkd.in/gvH6VBaU
LinkedIn: https://lnkd.in/gnK-vXjF
Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements.
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage (see the sketch after this list).
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
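To make the S3 responsibility above concrete, here is a minimal boto3 sketch that provisions a bucket with default encryption and public-access blocking. The bucket name and region are placeholders; in keeping with the IaC bullet, a real setup would more likely live in Terraform or CloudFormation.

```python
# Hedged sketch: a hardened S3 bucket via boto3. Bucket name and region are
# placeholders; this is not a substitute for managing the bucket as IaC.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")
bucket = "example-app-assets"  # placeholder

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)
# Encrypt objects at rest by default.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
# Block all forms of public access.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True, "IgnorePublicAcls": True,
        "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
    },
)
```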
Qualifications:
• Bachelor's degree in computer science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development.
Someshwara Software Pvt Ltd is an equal opportunity employer. We celebrate diversity and are dedicated to creating an inclusive environment for all employees.
Job Description
We are seeking a talented DevOps Engineer to join our dynamic team. The ideal candidate will have a passion for building and maintaining cloud infrastructure while ensuring the reliability and efficiency of our applications. You will be responsible for deploying and maintaining cloud environments, enhancing CI/CD pipelines, and ensuring optimal performance through proactive monitoring and troubleshooting.
Roles and Responsibilities:
- Cloud Infrastructure: Deploy and maintain cloud infrastructure on Microsoft Azure or AWS, ensuring scalability and reliability.
- CI/CD Pipeline Enhancement: Continuously improve CI/CD pipelines and build robust development and production environments.
- Application Deployment: Manage application deployments, ensuring high reliability and minimal downtime.
- Monitoring: Monitor infrastructure health and perform application log analysis to identify and resolve issues proactively (see the sketch after this list).
- Incident Management: Troubleshoot and debug incidents, collaborating closely with development teams to implement effective solutions.
- Infrastructure as Code: Enhance Ansible roles and Terraform modules, maintaining best practices for Infrastructure as Code (IaC).
- Tool Development: Write tools and utilities to streamline and improve infrastructure operations.
- SDLC Practices: Establish and uphold industry-standard Software Development Life Cycle (SDLC) practices with a strong focus on quality.
- On-call Support: Be available 24/7 for on-call incident management for production environments.
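A hedged sketch of the log-analysis bullet above: scan an application log for the noisiest ERROR messages so recurring issues surface before they become incidents. The log path and format are assumptions.

```python
# Minimal proactive log analysis: count and rank ERROR messages in an
# application log. Path and "ERROR <message>" format are assumptions.
import re
from collections import Counter

ERROR_RE = re.compile(r"\bERROR\b\s+(?P<msg>.*)")

counts: Counter[str] = Counter()
with open("/var/log/app/application.log", encoding="utf-8") as fh:  # placeholder path
    for line in fh:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group("msg")[:120]] += 1  # truncate to group similar messages

# Surface the five noisiest error messages for triage.
for msg, n in counts.most_common(5):
    print(f"{n:5d}  {msg}")
```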
Requirements:
- Cloud Experience: Hands-on experience deploying and provisioning virtual machines on Microsoft Azure or Amazon AWS.
- Linux Administration: Proficient with Linux systems and basic system administration tasks.
- Networking Knowledge: Working knowledge of network fundamentals (Ethernet, TCP/IP, WAF, DNS, etc.).
- Scripting Skills: Proficient in BASH and at least one high-level scripting language (Python, Ruby, Perl).
- Tools Proficiency: Familiarity with tools such as Git, Nagios, Snort, and OpenVPN.
- Containerization: Strong experience with Docker and Kubernetes is mandatory.
- Communication Skills: Excellent interpersonal communication skills, with the ability to engage with peers, customers, vendors, and partners across all levels of the organization.
NASDAQ listed, Service Provider IT Company
Job Summary:
As a Cloud Architect at the organization, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.
Key Responsibilities:
- Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
- Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
- Develop and maintain cloud-native solutions leveraging services from various cloud providers.
- Architect and deploy microservices using REST and GraphQL to support our application development needs.
- Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
- Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
- Implement robust security practices and policies to protect cloud environments and data.
- Design and implement data management strategies, including data governance, data integration, and data security.
- Stay up-to-date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
- Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
- Optimize cost and performance across different cloud environments.
Qualifications/ Experience & Skills Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 10 - 15 Years
- Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
- Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
- Strong knowledge of cloud-native solutions and microservices architecture.
- Proficiency in using GraphQL for designing and implementing APIs.
- Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
- Experience with DevOps practices and tools, including CI/CD pipelines.
- Excellent problem-solving skills and ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
- Experience with data management, including data governance, data integration, and data security.
Preferred Skills:
- Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with cloud cost management and optimization tools.
At the forefront of innovation in the digital video industry
Responsibilities:
- Work with development teams and product managers to ideate software solutions
- Design client-side and server-side architecture
- Creating a well-informed cloud strategy and managing the adaptation process
- Evaluating cloud applications, hardware, and software
- Develop and manage well-functioning databases and applications
- Write effective APIs
- Participate in the entire application lifecycle, focusing on coding and debugging
- Write clean code to develop, maintain and manage functional web applications
- Get feedback from, and build solutions for, users and customers
- Participate in requirements, design, and code reviews
- Engage with customers to understand and solve their issues
- Collaborate with remote team on implementing new requirements and solving customer problems
- Focus on quality of deliverables with high accountability and commitment to program objectives
Required Skills:
- 7–10 years of software development experience
- Experience using Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or other major cloud computing services.
- Strong skills in Containers, Kubernetes, Helm
- Proficiency in C#, .NET, PHP/Java technologies with an acumen for code analysis, debugging, and problem solving
- Strong skills in database design (PostgreSQL or MySQL)
- Experience with caching and message queues
- Experience in REST API framework design
- Strong focus on high-quality and maintainable code
- Understanding of multithreading, memory management, object-oriented programming
Preferred skills:
- Experience in working with Linux OS
- Experience in Core Java programming
- Experience in working with JSP/Servlets, Struts, Spring / Spring Boot, Hibernate
- Experience in working with web technologies HTML,CSS
- Knowledge of source versioning and development tools, particularly Git, Stash, JIRA, and Jenkins.
- Domain Knowledge of Video, Audio Codecs
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which make TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department's data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lake-house and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines (see the sketch after this list).
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
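As a small illustration of the data-cleaning and ETL bullets above, the following pandas sketch cleans and restructures hypothetical furnace sensor readings. The file, column names, and thresholds are invented for illustration, not TVARIT's actual pipelines.

```python
# Hedged ETL sketch for manufacturing sensor data: clean, structure, and
# land into an analytics-friendly format. All names/units are assumptions.
import pandas as pd

raw = pd.read_csv("furnace_sensors.csv", parse_dates=["timestamp"])  # placeholder file

clean = (
    raw.dropna(subset=["temperature_c"])                 # drop unusable rows
       .query("0 < temperature_c < 2000")                # remove sensor glitches
       .assign(temperature_k=lambda d: d.temperature_c + 273.15)
       .set_index("timestamp")
       .resample("5min")                                 # structure for analytics
       .mean(numeric_only=True)
)
clean.to_parquet("furnace_sensors_5min.parquet")         # load into the lake-house zone
```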
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits and Perks:
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
- Architectural Leadership:
- Design and architect robust, scalable, and high-performance Hadoop solutions.
- Define and implement data architecture strategies, standards, and processes.
- Collaborate with senior leadership to align data strategies with business goals.
- Technical Expertise:
- Develop and maintain complex data processing systems using Hadoop and its ecosystem (HDFS, YARN, MapReduce, Hive, HBase, Pig, etc.).
- Ensure optimal performance and scalability of Hadoop clusters.
- Oversee the integration of Hadoop solutions with existing data systems and third-party applications.
- Strategic Planning:
- Develop long-term plans for data architecture, considering emerging technologies and future trends.
- Evaluate and recommend new technologies and tools to enhance the Hadoop ecosystem.
- Lead the adoption of big data best practices and methodologies.
- Team Leadership and Collaboration:
- Mentor and guide data engineers and developers, fostering a culture of continuous improvement.
- Work closely with data scientists, analysts, and other stakeholders to understand requirements and deliver high-quality solutions.
- Ensure effective communication and collaboration across all teams involved in data projects.
- Project Management:
- Lead large-scale data projects from inception to completion, ensuring timely delivery and high quality.
- Manage project resources, budgets, and timelines effectively.
- Monitor project progress and address any issues or risks promptly.
- Data Governance and Security:
- Implement robust data governance policies and procedures to ensure data quality and compliance.
- Ensure data security and privacy by implementing appropriate measures and controls.
- Conduct regular audits and reviews of data systems to ensure compliance with industry standards and regulations.
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience, whether in IT, IS, or software development
- Experience in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
Job Description - Manager Sales
Min 15 years of experience.
Should have experience in sales of the cloud IT SaaS product portfolio that Savex deals with.
Team management experience, including leading a cloud business and its teams.
Sales Manager - Cloud Solutions
Reporting to senior management
Good personality
Distribution background
Keen on Channel partners
Good database of OEMs and channel partners.
Age group - 35 to 45yrs
Male Candidate
Good communication
B2B Channel Sales
Location - Bangalore
If interested reply with cv and below details
Total exp -
Current CTC -
Expected CTC -
Notice period -
Current location -
Qualification -
Total exp Channel Sales -
What are the Cloud IT products, you have done sales for?
What is the annual revenue you have generated through sales?
About the job
MangoApps builds enterprise products that make employees at organizations across the globe more effective and productive in their day-to-day work. We seek tech pros, great communicators, collaborators, and efficient team players for this role.
Job Description:
Experience: 5+ years (relevant experience as an SRE)
Open positions: 2
Job Responsibilities as an SRE
- Must have very strong experience in Linux (Ubuntu) administration
- Strong in network troubleshooting
- Experienced in handling and diagnosing the root cause of compute and database outages
- Strong experience required with cloud platforms, specifically Azure or GCP (proficiency in at least one is mandatory)
- Must have very strong experience in designing, implementing, and maintaining highly available and scalable systems
- Must have expertise in CloudWatch or similar log systems and in troubleshooting using them (see the sketch after this list)
- Proficiency in scripting and programming languages such as Python, Go, or Bash is essential
- Familiarity with configuration management tools such as Ansible, Puppet, or Chef is required
- Must possess knowledge of database/SQL optimization and performance tuning.
- Respond promptly to and resolve incidents to minimize downtime
- Implement and manage infrastructure using IaC tools like Terraform, Ansible, or CloudFormation
- Excellent problem-solving skills with a proactive approach to identifying and resolving issues are essential.
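Since the list above names CloudWatch as the reference log system, here is a minimal boto3 sketch of troubleshooting from logs: pull the last hour of ERROR events from a placeholder log group.

```python
# Hedged sketch of log-driven troubleshooting with CloudWatch Logs.
# The log group name is a placeholder; credentials come from the environment.
import time
import boto3

logs = boto3.client("logs")
resp = logs.filter_log_events(
    logGroupName="/app/production",              # placeholder log group
    filterPattern="ERROR",
    startTime=int((time.time() - 3600) * 1000),  # last hour, epoch millis
)
for event in resp["events"]:
    print(event["timestamp"], event["message"][:200])
```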
Experience: 5+ Years
• Experience in Core Java, Spring Boot
• Experience in microservices and angular
• Extensive experience in developing enterprise-scale systems for global organizations. Should possess good architectural knowledge and be aware of enterprise application design patterns.
• Should be able to analyze, design, develop and test complex, low-latency client-facing applications.
• Good development experience with RDBMS in SQL Server, Postgres, Oracle or DB2
• Good knowledge of multi-threading
• Basic working knowledge of Unix/Linux
• Excellent problem solving and coding skills in Java
• Strong interpersonal, communication and analytical skills.
• Should be able to express their design ideas and thoughts
We are a technology company operating in the media space. We are the pioneers of robot journalism in India. We use a mix of AI-generated and human-edited content across media formats, be it audio, video, or text.
Our key products include India’s first explanatory journalism portal (NewsBytes), a content platform for developers (DevBytes), and a SaaS platform for content creators (YANTRA).
Our B2C media products are consumed by more than 50 million users in a month, while our AI-driven B2B content engine helps companies create text-based content at scale.
The company was started by alumni of IIT, IIM Ahmedabad, and Cornell University. It has raised institutional financing from a well-renowned media-tech VC and a Germany-based media conglomerate.
We are hiring a talented DevOps Engineer with 3+ years of experience to join our team. If you're excited to be part of a winning team, we are a great place to grow your career.
Responsibilities
● Handle and optimise cloud (servers and CDN)
● Build monitoring tools for the infrastructure
● Perform a granular level of analysis and optimise usage
● Help migrate from a single cloud environment to multi-cloud strategy
● Monitor threats and explore building a protection layer
● Develop scripts to automate certain aspects of the deployment process
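As one hedged example of the script-automation bullet just above (and the CDN bullet earlier in this list), the sketch below invalidates CDN paths after a deploy, using AWS CloudFront. The distribution ID and paths are placeholders, not this company's actual setup.

```python
# Post-deploy CDN cache invalidation sketch (AWS CloudFront shown).
# Distribution ID and paths are placeholders.
import time
import boto3

cf = boto3.client("cloudfront")
cf.create_invalidation(
    DistributionId="E123EXAMPLE",  # placeholder
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/assets/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```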
Requirements and Skills
● 0-2 years of experience as a DevOps Engineer
● Proficient with AWS and GCP
● A certification from a relevant cloud provider
● Knowledge of PHP will be an advantage
● Working knowledge of databases and SQL
Job Title: Javascript Developers (Full-Stack Web)
On-site Location: NCTE, Dwarka, Delhi
Job Type: Full-Time
Company: Bharattech AI Pvt Ltd
Eligibility:
- 6 years of experience (minimum)
- B.E/B.Tech/M.E/M.Tech -or- MCA -or- M.Sc(IT or CS) -or- MS in Software Systems
About the Company:
Bharattech AI Pvt Ltd is a leader in providing innovative AI and data analytics solutions. We have partnered with the National Council for Teacher Education (NCTE), Delhi, to implement and develop their data analytics & MIS development lab, called VSK. We are looking for skilled Javascript Developers (Full-Stack Web) to join our team and contribute to this prestigious project.
Job Description:
Bharattech AI Pvt Ltd is seeking two Javascript Developers (Full-Stack Web) to join our team for an exciting project with NCTE, Delhi. As a Full-Stack Developer, you will play a crucial role in the development and integration of the VSK Web application and related systems.
Work Experience:
- Minimum 6 years' experience in Web apps, PWAs, Dashboards, or Website Development.
- Proven experience in the complete lifecycle of web application development.
- Demonstrated experience as a full-stack developer.
- Knowledge of either MERN, MEVN, or MEAN stack.
- Knowledge of popular frameworks (Express/Meteor/React/Vue/Angular etc.) for any of the stacks mentioned above.
Role and Responsibilities:
- Study the readily available client datasets and leverage them to run the VSK smoothly.
- Communicate with the Team Lead and Project Manager to capture software requirements.
- Develop high-level system design diagrams for program design, coding, testing, debugging, and documentation.
- Develop, update, and modify the VSK Web application/Web portal.
- Integrate existing software/applications with VSK using readily available APIs.
Skills and Competencies:
- Proficiency in full-stack development, including both front-end and back-end technologies.
- Strong knowledge of web application frameworks and development tools.
- Experience with API integration and software development best practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and the ability to work effectively in a team environment.
Why Join Us:
- Be a part of a cutting-edge project with a significant impact on the education sector.
- Work in a dynamic and collaborative environment with opportunities for professional growth.
- Competitive salary and benefits package.
Join Bharattech AI Pvt Ltd and contribute to transforming technological development at NCTE, Delhi!
Bharattech AI Pvt Ltd is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Responsibilities:
- You will develop tools and applications aligning to the best coding practices.
- You will perform technical analysis, design, development, and implementation of projects.
- You will write clear quality code for software and applications and perform test reviews.
- You will detect and troubleshoot software issues
- You will develop, implement, and test APIs
- You will adhere to industry best practices and contribute to internal coding standards
- You will manipulate images and videos based on project requirements.
Requirements:
- You have a strong passion for start-ups and the proactiveness to deliver
- You have hands-on experience building services using NodeJs, ExpressJs technologies
- You have hands-on experience with MongoDB (NoSQL/SQL) database technologies
- You are good at web technologies like ReactJS/NextJS, JavaScript, and TypeScript
- You are good at RESTful/SOAP web services
- You are good at caching and third-party integration
- You have strong debugging and troubleshooting skills
- Experience with either AWS (Amazon Web Services) or GCP (Google Cloud Platform)
- Knowledge of Python, Chrome extension development, and DevOps is a plus
- You must be proficient in building scalable backend infrastructure software or distributed systems with exposure to Front-end and backend libraries/frameworks.
- Experience with Databases and microservices architecture is an advantage
- You should be able to push your limits and go beyond your role to scale the product
- A go-getter attitude and the ability to drive progress with very little guidance and a short turnaround time
Role Description
This is a full-time, client-facing, on-site role for a Data Scientist at UpSolve Solutions in Mumbai. The Data Scientist will be responsible for performing various day-to-day tasks, including data science, statistics, data analytics, data visualization, and data analysis. The role involves utilizing these skills to provide actionable insights that drive business decisions and solve complex problems.
Qualifications
- Data Science, Statistics, and Data Analytics skills
- Data Visualization and Data Analysis skills
- Strong problem-solving and critical thinking abilities
- Ability to work with large datasets and perform data preprocessing
- Proficiency in programming languages such as Python or R
- Experience with machine learning algorithms and predictive modeling
- Excellent communication and presentation skills
- Bachelor's or Master's degree in a relevant field (e.g., Computer Science, Statistics, Data Science)
- Experience in the field of video and text analytics is a plus
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices.
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications (see the sketch after this list).
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar.
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
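A minimal sketch, assuming kubeconfig access, of the Kubernetes-management bullet referenced above: list pods that are not Running or Succeeded, often the first step when triaging cluster health. Cluster details are whatever the kubeconfig points at.

```python
# Triage sketch with the official Kubernetes Python client: surface pods
# in unexpected phases across all namespaces.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```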
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.
Responsibilities:
- Design, implement, and maintain robust CI/CD pipelines using Azure DevOps for continuous integration and continuous delivery (CI/CD) of software applications.
- Provision and manage infrastructure resources on Microsoft Azure, including virtual machines, containers, storage, and networking components (see the sketch after this list).
- Implement and manage Kubernetes clusters for containerized application deployments and orchestration.
- Configure and utilize Azure Container Registry (ACR) for secure container image storage and management.
- Automate infrastructure provisioning and configuration management using tools like Azure Resource Manager (ARM) templates.
- Monitor application performance and identify potential bottlenecks using Azure monitoring tools.
- Collaborate with developers and operations teams to identify and implement continuous improvement opportunities for the DevOps process.
- Troubleshoot and resolve DevOps-related issues, ensuring smooth and efficient software delivery.
- Stay up-to-date with the latest advancements in cloud technologies, DevOps tools, and best practices.
- Maintain a strong focus on security throughout the software delivery lifecycle.
- Participate in code reviews to identify potential infrastructure and deployment issues.
- Effectively communicate with technical and non-technical audiences on DevOps processes and initiatives.
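As a hedged illustration of the Azure provisioning and monitoring bullets above, this sketch lists virtual machines and their provisioning state with the Azure SDK for Python. The subscription ID is a placeholder, and authentication is assumed to come from DefaultAzureCredential (environment, CLI, or managed identity).

```python
# Inventory sketch with the Azure SDK for Python: list VMs in a subscription.
# Subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "00000000-0000-0000-0000-000000000000")

for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.provisioning_state)
```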
Qualifications:
- Proven experience in designing and implementing CI/CD pipelines using Azure DevOps.
- In-depth knowledge of Microsoft Azure cloud platform services (IaaS, PaaS, SaaS).
- Expertise in deploying and managing containerized applications using Kubernetes.
- Experience with Infrastructure as Code (IaC) tools like ARM templates.
- Familiarity with Azure monitoring tools and troubleshooting techniques.
- A strong understanding of DevOps principles and methodologies (Agile, Lean).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong written and verbal communication skills.
- A minimum of one relevant Microsoft certification (e.g., Azure Administrator Associate, DevOps Engineer Expert) is highly preferred.
at Datapure Technologies Pvt. Ltd.
"Preferred candidates who are in Indore"
Responsibilities:
● Design and develop advanced applications for the Android platform, ensuring high performance, responsiveness, and user-friendly interfaces.
● Collaborate with product managers, designers, and backend engineers to define project requirements and deliver innovative solutions.
● Implement and maintain backend services, APIs, and databases to support mobile applications.
● Conduct thorough testing and debugging to ensure application stability and performance optimization.
● Stay updated with the latest industry trends, technologies, and best practices to continuously improve development processes.
● Participate in code reviews, provide constructive feedback, and mentor junior team members when necessary.
Requirements:
● Bachelor's or Master's degree in Computer Science, Engineering, or related field.
● 2+ years of professional experience as an Android developer, with expertise in Kotlin and Java programming languages.
● Strong understanding of the Android SDK and different versions of Android, and familiarity with Material Design principles.
● Proficiency in frontend technologies such as XML, JSON, and third-party libraries.
● Experience with backend development using technologies like Node.js, Python, or PHP.
● Knowledge of database management systems (e.g., MySQL, MongoDB) and experience in designing schemas and optimizing queries.
● Excellent problem-solving skills, ability to work independently, and a passion for learning new technologies.
● Good communication skills and ability to collaborate effectively within a team environment.
Preferred Qualifications:
● Experience with cloud platforms (e.g., AWS, Google Cloud) and serverless architecture.
● Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
● Previous experience in Agile/Scrum methodologies.
● Published apps on the Google Play Store or contributions to open-source projects
GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell); knowing Python is a must (see the sketch after this list).
- Collaborate with teams across the company (i.e., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux-based infrastructure.
- Excellent problem-solving and troubleshooting skills
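A small sketch of the Python-automation expectation above: audit Google Cloud Storage buckets for uniform bucket-level access, a common hardening check. The project ID is a placeholder and application-default credentials are assumed.

```python
# GCP automation sketch: flag buckets without uniform bucket-level access.
# Project ID is a placeholder; auth uses application-default credentials.
from google.cloud import storage

client = storage.Client(project="example-project")  # placeholder project

for bucket in client.list_buckets():
    if not bucket.iam_configuration.uniform_bucket_level_access_enabled:
        print(f"bucket {bucket.name}: uniform bucket-level access is OFF")
```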
at Wissen Technology
Job Title: .NET Developer with Cloud Migration Experience
Job Description:
We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly in migrating on-premise applications to cloud platforms such as Azure or AWS.
Responsibilities:
- Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
- Collaborate with cross-functional teams to define, design, and ship new features
- Participate in code reviews and ensure coding best practices are followed
- Work closely with the infrastructure team to migrate on-premise applications to the cloud
- Troubleshoot and debug issues that arise during migration and post-migration phases
- Stay updated with the latest trends and technologies in .NET development and cloud computing
Requirements:
- Bachelor's degree in Computer Science or related field
- X+ years of experience in .NET development using C#, MVC, and ASP.NET
- Hands-on experience with cloud migration projects, preferably with Azure or AWS
- Strong understanding of cloud computing concepts and principles
- Experience with database technologies such as SQL Server
- Excellent problem-solving and communication skills
Preferred Qualifications:
- Microsoft Azure or AWS certification
- Experience with other cloud platforms such as Google Cloud Platform (GCP)
- Familiarity with DevOps practices and tools
MUST HAVES:
- Java 11, Java 17 & above only
- Spring Boot and microservices experience is a must
- Cloud experience is a must (AWS or GCP or Azure)
- Strong understanding of functional programming and reactive programming concepts.
- Experience with asynchronous programming and async frameworks/libraries.
- Proficiency in SQL databases (MySQL, PostgreSQL, etc.).
- WFO in NOIDA only.
Other requirements:
- Knowledge of socket programming and real-time communication protocols.
- Experience building complex enterprise-grade applications with multiple components and integrations
- Good coding practices and ability to design solutions
- Good communication skills
- Ability to mentor team and give technical guidance
- Full-stack skills with any one of JavaScript, ReactJS, or AngularJS are preferable.
- Excellent problem-solving skills and attention to detail.
- Preferred: experience with NoSQL databases (MongoDB, Cassandra, Redis, etc.).
Join Our Journey
Jules develops an amazing end-to-end solution for recycled materials traders, importers and exporters. Which means a looooot of internal, structured data to play with in order to provide reporting, alerting and insights to end-users. With about 200 tables covering all business processes, from order management to payments, including logistics, hedging and claims, the wealth the data entered in Jules can unlock is massive.
After working on a simple stack made of Postgres, SQL queries and a visualization solution, the company is now ready to set up its data stack, and the only thing it misses is you. We are thinking DBT, Redshift or Snowflake, Fivetran, Metabase or Luzmo, etc. We also have an AI team already playing around with text-driven data interaction.
As a Data Engineer at Jules AI, your duties will involve both data engineering and product analytics, enhancing our data ecosystem. You will collaborate with cross-functional teams to design, develop, and sustain data pipelines, and conduct detailed analyses to generate actionable insights.
Roles And Responsibilities:
- Work with stakeholders to determine data needs, and design and build scalable data pipelines.
- Develop and sustain ELT processes to guarantee timely and precise data availability for analytical purposes.
- Construct and oversee large-scale data pipelines that collect data from various sources.
- Expand and refine our DBT setup for data transformation.
- Engage with our data platform team to address customer issues.
- Apply your advanced SQL and big data expertise to develop innovative data solutions.
- Enhance and debug existing data pipelines for improved performance and reliability.
- Generate and update dashboards and reports to share analytical results with stakeholders.
- Implement data quality controls and validation procedures to maintain data accuracy and integrity.
- Work with various teams to incorporate analytics into product development efforts.
- Use technologies like Snowflake, DBT, and Fivetran effectively.
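As a hedged sketch of the data-quality bullet above, using the Snowflake stack this posting names: fail fast if a staging table contains null business keys. Connection parameters, table, and column names are placeholders; in a DBT setup this check would more likely be expressed as a schema test.

```python
# Data-quality gate sketch with the Snowflake Python connector.
# All connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="...",  # placeholders
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM stg_orders WHERE order_id IS NULL")
null_keys = cur.fetchone()[0]
if null_keys:
    raise ValueError(f"stg_orders has {null_keys} rows with null order_id")
```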
Mandatory Qualifications:
- Hold a Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Possess at least 4 years of experience in Data Engineering, ETL Building, database management, and Data Warehousing.
- Demonstrated expertise as an Analytics Engineer or in a similar role.
- Proficient in SQL, a scripting language (Python), and a data visualization tool.
- Mandatory experience in working with DBT.
- Experience in working with Airflow, and cloud platforms like AWS, GCP, or Snowflake.
- Deep knowledge of ETL/ELT patterns.
- Require at least 1 year of experience in building Data pipelines and leading data warehouse projects.
- Experienced in mentoring data professionals across all levels, from junior to senior.
- Proven track record in establishing new data engineering processes and navigating through ambiguity.
- Preferred Skills: Knowledge of Snowflake and reverse ETL tools is advantageous.
Grow, Develop, and Thrive With Us
- Global Collaboration: Work with a dynamic team that’s making an impact across the globe, in the recycling industry and beyond. We have customers in India, Singapore, the United States, Mexico, Germany, France, and more
- Professional Growth: a highway toward setting up a great data team and evolving into a leader
- Flexible Work Environment: Competitive compensation, performance-based rewards, health benefits, paid time off, and flexible working hours to support your well-being.
Apply to us directly: https://nyteco.keka.com/careers/jobdetails/41442
Company - Apptware Solutions
Location - Baner, Pune
Team Size - 130+
Job Description -
Cloud Engineer with 8+ years of experience
Roles and Responsibilities
● Have 8+ years of strong experience in deployment, management and maintenance of large systems on-premise or cloud
● Experience maintaining and deploying highly-available, fault-tolerant systems at scale
● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc.; see the sketch after this list)
● Practical experience with Docker containerization and clustering (Kubernetes/ECS)
● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)
● Version control system experience (e.g. Git)
● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)
● Operational (e.g. HA/backups) NoSQL experience (e.g. MongoDB, Redis)
● SQL experience (e.g. MySQL)
● Experience with configuration management tools (e.g. Ansible, Chef)
● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)
● Bachelor's or master’s degree in CS, or equivalent practical experience
● Effective communication skills
● Hands-on experience with cloud providers like MS Azure and GCP
● A sense of ownership and ability to operate independently
● Experience with Jira and one or more Agile SDLC methodologies
● Nice to Have:
○ Sensu and Graphite
○ Ruby or Java
○ Python or Groovy
○ Java Performance Analysis
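As a flavour of the repetitive-task automation this role values, below is a small, hypothetical boto3 sketch that flags EC2 instances missing a mandatory tag. The region and the tag key are assumptions for the example.

```python
import boto3

def untagged_instances(region: str = "ap-south-1", required_tag: str = "owner"):
    """Return IDs of EC2 instances missing the required tag key."""
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    # Paginate so the scan stays correct on accounts with many instances.
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(instance_id)
```

A script like this typically runs on a schedule (cron, Lambda) and feeds an alerting channel rather than being run by hand.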
Role: Cloud Engineer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Role Category: Programming & Design
Publicis Sapient Overview:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode (an illustrative sketch follows this list)
• Build functionality for data analytics, search and aggregation
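To make the batch & real-time ingestion item concrete, here is an illustrative PySpark structured-streaming sketch that reads events from a Kafka topic and lands them as Parquet. The broker address, topic name, and paths are placeholders, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

# Read the raw Kafka stream and keep key/value as strings plus the event time.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(
        col("key").cast("string"),
        col("value").cast("string"),
        col("timestamp"),
    )
)

# Land micro-batches as Parquet; the checkpoint makes the stream restartable.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/raw/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```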
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and the other components required to build end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data-platform-related services on at least one cloud platform, plus IAM and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infrastructure provisioning on cloud, automated build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Data Scientist – Program Embedded
Job Description:
We are seeking a highly skilled and motivated senior data scientist to support a big data program. The successful candidate will play a pivotal role across multiple projects in this program, covering traditional tasks from revenue management, demand forecasting and improving customer experience, to testing and adopting new tools and platforms such as Copilot and Fabric for different purposes. The candidate is expected to have deep expertise in machine learning methodology and applications, and should have completed multiple large-scale data science projects (the full cycle from ideation to BAU). Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. This is a data science role embedded directly in the program and its projects, so stakeholder management and collaboration with partners are crucial, on top of the deep technical expertise.
What we are looking for:
- Highly proficient in Python/PySpark/R.
- Understands MLOps concepts, with working experience in product industrialization from a data science point of view: building products for live deployment, continuous development and continuous integration.
- Familiar with cloud platforms such as Azure and GCP and the data management systems on those platforms; familiar with Databricks and product deployment on Databricks.
- Experience in ML projects involving techniques such as regression, time series, clustering, classification, dimension reduction and anomaly detection, with both traditional ML and DL approaches.
- Solid background in statistics: probability distributions, A/B-testing validation, univariate/multivariate analysis, hypothesis testing for different purposes, data augmentation, etc. (a toy example follows this list).
- Familiar with designing testing frameworks for different modelling practices/projects based on business needs.
- Exposure to Gen AI tools, with enthusiasm for experimenting and new ideas about what can be done.
- Experience improving an internal company process using an AI tool would be a plus (e.g. process simplification, manual-task automation, automated emails).
- Ideally 10+ years of experience, including independent, business-facing roles.
- Experience in CPG or retail as a data scientist would be nice, but is not the top priority, especially for candidates who have navigated multiple industries.
- Proactivity and collaboration are essential.
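As a toy illustration of the A/B-testing validation skills listed above, the sketch below runs a Welch two-sample t-test on simulated conversion metrics; the data and significance threshold are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.10, scale=0.03, size=5_000)  # baseline metric
variant = rng.normal(loc=0.11, scale=0.03, size=5_000)  # treated metric

# Welch's t-test: does the variant shift the mean of the metric?
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: the variant appears to move the metric.")
else:
    print("No significant difference detected.")
```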
Some example projects within the program:
- Test new tools/platforms such as Copilot and Fabric for commercial reporting: testing, validation and building trust.
- Build algorithms for predicting trends in categories and consumption to support dashboards.
- Revenue growth management: create and understand the algorithms behind the tools (which may be built by third parties) that we need to maintain or choose to improve; prioritize and build the product roadmap; design new solutions and articulate and quantify their limitations.
- Demand forecasting: create localized forecasts to improve in-store availability, with proper model monitoring for early detection of potential issues in the forecast, focusing particularly on improving the end-user experience (a minimal sketch follows this list).
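A minimal sketch in the spirit of the demand-forecasting project: Holt-Winters exponential smoothing on a synthetic daily demand series. Real work would add the model monitoring the description calls for; everything here (series, seasonality, horizon) is simulated for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Simulate six months of daily demand with a weekly pattern.
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=180, freq="D")
weekly = 10 * np.sin(2 * np.pi * idx.dayofweek / 7)
demand = pd.Series(100 + weekly + rng.normal(0, 5, len(idx)), index=idx)

# Additive trend and weekly seasonality; forecast the next two weeks.
model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=7
).fit()
print(model.forecast(14))
```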
Data Scientist – Delivery & New Frontiers Manager
Job Description:
We are seeking a highly skilled and motivated data scientist to join our Data Science team. The successful candidate will play a pivotal role in our data-driven initiatives and be responsible for designing, developing, and deploying data science solutions that drive business value for stakeholders. This role involves mapping business problems to formal data science solutions, working with a wide range of structured and unstructured data, designing architectures, creating sophisticated models, setting up operations for the data science product with support from the MLOps team, and facilitating business workshops. In a nutshell, this person will represent data science and provide expertise across the full project cycle. Expectations of the successful candidate are above those of a typical data scientist; beyond technical expertise, problem solving in complex set-ups will be key to success in this role.
Responsibilities:
- Collaborate with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand business needs and identify data science opportunities.
- Map complex business problems to data science problems and design data science solutions using the GCP/Azure Databricks platform.
- Collect, clean, and preprocess large datasets from various internal and external sources.
- Streamline the data science process, working with the Data Engineering and Technology teams.
- Manage multiple analytics projects within a function to deliver end-to-end data science solutions, create insights and identify patterns.
- Develop and maintain data pipelines and infrastructure to support the data science projects.
- Communicate findings and recommendations to stakeholders through data visualizations and presentations.
- Stay up to date with the latest data science trends and technologies, specifically in the GCP ecosystem.
Education / Certifications:
Bachelor's or Master's in Computer Science, Engineering, Computational Statistics, or Mathematics.
Job specific requirements:
- Brings 5+ years of deep data science experience
- Strong knowledge of machine learning and statistical modeling techniques in a cloud-based environment such as GCP, Azure, or Amazon (AWS)
- Experience with programming languages such as Python, R, Spark
- Experience with data visualization tools such as Tableau, Power BI, and D3.js
- Strong understanding of data structures, algorithms, and software design principles
- Experience with GCP platforms and services such as BigQuery, Cloud ML Engine, and Cloud Storage
- Experience in configuring and setting up version control for code, data, and machine learning models using GitHub.
- Self-driven, able to work with cross-functional teams in a fast-paced environment, and adaptable to changing business needs.
- Strong analytical and problem-solving skills
- Excellent verbal and written communication skills
- Working knowledge of application architecture, data security and compliance.
Responsibilities:
• Develop and maintain a high-quality, scalable, and efficient Java codebase for our ad-serving platform.
• Collaborate with cross-functional teams including product managers, designers, and other developers to understand requirements and translate them into technical solutions.
• Design and implement new features and functionalities in the ad-serving system, focusing on performance optimization and reliability.
• Troubleshoot and debug complex issues in the ad server environment, providing timely resolutions to ensure uninterrupted service.
• Conduct code reviews, provide constructive feedback, and enforce coding best practices to maintain code quality and consistency across the platform.
• Stay updated with emerging technologies and industry trends in ad serving and digital advertising, and integrate relevant innovations into our platform.
• Work closely with DevOps and infrastructure teams to deploy and maintain the ad-serving platform in a cloud-based environment.
• Collaborate with stakeholders to gather requirements, define technical specifications, and estimate development efforts for new projects and features.
• Mentor junior developers, sharing knowledge and best practices to foster a culture of continuous learning and improvement within the development team.
• Participate in on-call rotations and provide support for production issues as needed, ensuring maximum uptime and reliability of the ad-serving platform.
Publicis Sapient Overview:
As Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L1 in Data Engineering, you will carry out technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and the other components required to build end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infrastructure provisioning on cloud, automated build & deployment pipelines, code quality
6. Working knowledge of data-platform-related services on at least one cloud platform, plus IAM and data security
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes