
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

Koolioai
Posted by Swarna M
Remote, Chennai
5 - 7 yrs
₹20L - ₹30L / yr
Python
React.js
Flask
Google Cloud Platform (GCP)

About koolio.ai

Website: www.koolio.ai

koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.

About the Full-Time Position

We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.

Key Responsibilities:

  • Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
  • Design and build efficient, secure, and modular client-side and server-side architecture
  • Develop high-performance web applications with reusable and maintainable code
  • Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
  • Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
  • Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
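Responsibilities like the last two above typically come down to small HTTP handlers. Below is a minimal, framework-agnostic sketch of the validation logic a Google Cloud Function endpoint might contain; the field names (`project_id`, `format`) and the render-request concept are hypothetical illustrations, not koolio.ai's actual API:

```python
import json

def handle_render_request(payload: dict) -> tuple[str, int]:
    """Validate a hypothetical audio-render request and build a JSON response.

    In a real Cloud Function this logic would sit inside the HTTP-triggered
    entry point; the field names here are illustrative only.
    """
    if "project_id" not in payload:
        return json.dumps({"error": "project_id is required"}), 400
    fmt = payload.get("format", "mp3")
    if fmt not in {"mp3", "wav"}:
        return json.dumps({"error": f"unsupported format: {fmt}"}), 400
    return json.dumps({"status": "queued", "format": fmt}), 200
```

Keeping the validation in a plain function like this makes it unit-testable without deploying to GCP.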

Requirements and Skills:

  • Education: Degree in Computer Science or a related field
  • Work Experience: Minimum of 6 years of proven experience as a Full Stack Developer or in a similar role, with demonstrable expertise in building web applications at scale
  • Technical Skills:
  • Proficiency in front-end technologies such as HTML, CSS, JavaScript, jQuery, and React.js
  • Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
  • Familiarity with NoSQL and PostgreSQL databases
  • Experience working with audio/video processing libraries is a strong plus
  • Soft Skills:
  • Strong problem-solving skills and the ability to think critically about issues and solutions
  • Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
  • Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
  • Keen attention to detail and a passion for delivering high-quality, scalable solutions
  • Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment

Compensation and Benefits:

  • Total Yearly Compensation: ₹25 LPA based on skills and experience
  • Health Insurance: Comprehensive health coverage provided by the company
  • ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team

Why Join Us?

  • Be a part of a passionate and visionary team at the forefront of audio content creation
  • Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
  • Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
  • Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
  • Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact


Koolioai
Posted by Swarna M
Chennai
0 - 1 yrs
₹4.5L - ₹6.5L / yr
Python
Flask
Google Cloud Platform (GCP)

About koolio.ai

Website: www.koolio.ai

koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.

About the Full-Time Position

We are seeking a Junior Backend Engineer (Fresher) to join our team on a full-time, hybrid basis. This role is ideal for motivated recent graduates eager to gain hands-on experience in backend development. Working closely with senior engineers, you’ll play a vital role in enhancing koolio.ai’s backend infrastructure, focusing on building, maintaining, and optimizing server-side applications and APIs. This is a unique opportunity to grow within an innovative environment that values creativity and technical excellence.

Key Responsibilities:

  • Assist in developing and maintaining backend services and APIs, ensuring stability, scalability, and performance
  • Collaborate with the frontend and QA teams to ensure seamless integration of backend systems with user-facing features
  • Help troubleshoot and resolve backend issues, working alongside senior developers to enhance system reliability
  • Participate in code reviews, learning best practices and contributing to maintaining high code quality standards
  • Support in monitoring and improving backend performance metrics
  • Document processes, code, and system architecture under guidance to ensure comprehensive, up-to-date technical resources


Requirements and Skills:

  • Education: Degree in Computer Science or a related field
  • Work Experience: No prior work experience required; internships or academic projects related to backend development are a plus
  • Technical Skills:
  • Basic understanding of backend systems and APIs
  • Working experience in Python
  • Familiarity with basic version control systems like Git
  • Knowledge of databases (e.g., MySQL, MongoDB) and basic proficiency in SQL
  • Understanding of cloud platforms or services like Google Cloud is an advantage
  • Soft Skills:
  • Eagerness to learn and apply new technologies in a fast-paced environment
  • Strong analytical and problem-solving skills
  • Excellent attention to detail and a proactive mindset
  • Ability to communicate effectively and work in a collaborative, remote team
  • Other Skills:
  • Familiarity with basic DevOps practices or interest in learning is beneficial

Compensation and Benefits:

  • Total Yearly Compensation: ₹4.5-6 LPA based on skills and experience
  • Health Insurance: Comprehensive health coverage provided by the company

Why Join Us?

  • Be a part of a passionate and visionary team at the forefront of audio content creation
  • Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
  • Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
  • Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
  • Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact


HighLevel Inc.
Posted by Rini Shah
Remote only
4 - 9 yrs
Best in industry
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB

About HighLevel:  

HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.


Our Website - https://www.gohighlevel.com/

YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g

Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/


Our Customers:

HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.


Scale at HighLevel:

Work at scale: our infrastructure handles 30 billion+ API hits, 20 billion+ message events, and more than 200 terabytes of data


About the role:

We are seeking a seasoned Full Stack Developer with hands-on experience in Node.js and Vue.js (or React/Angular). You will be instrumental in building cutting-edge, AI-powered products and in mentoring or leading a team of engineers.


Team-Specific Focus Areas:

Conversations AI:

-Develop AI solutions for appointment booking, form filling, sales, and intent recognition

-Ensure seamless integration and interaction with users through natural language processing and understanding

Workflows AI:

-Create and optimize AI-powered workflows to automate and streamline business processes

Voice AI:

-Focus on VOIP technology with an emphasis on low latency and high-quality voice interactions

-Fine-tune voice models for clarity, accuracy, and naturalness in various applications

Support AI:

-Integrate AI solutions with FreshDesk and ClickUp to enhance customer support and ticketing systems

-Develop tools for automated response generation, issue resolution, and workflow management

Platform AI:

-Oversee AI training, billing, content generation, funnels, image processing, and model evaluations

-Ensure scalable and efficient AI models that meet diverse platform needs and user demands


Responsibilities:

  • REST APIs - Understanding REST philosophy. Writing secure, reusable, testable, and efficient APIs.
  • Database - Designing collection schemas, and writing efficient queries
  • Frontend - Developing user-facing features and integration with REST APIs
  • UI/UX - Being consistent with the design principles and reusing components wherever possible
  • Communication - With other team members, product team, and support team


Requirements:

  • Expertise with large scale Conversation Agents along with Response Evaluations
  • Good hands-on experience with Node.js and Vue.js (or React/Angular)
  • Experience working with production-grade applications that see significant usage
  • Bachelor's degree or equivalent experience in Engineering or related field of study
  • 5+ years of engineering experience
  • Expertise with MongoDB
  • Proficient understanding of code versioning tools, such as Git
  • Strong communication and problem-solving skills


EEO Statement:

At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.

Incubyte
Posted by Lifi Lawrance
Remote only
7 - 8 yrs
₹7L - ₹40L / yr
Data Warehouse (DWH)
Informatica
ETL
SQL
Python

Who We Are 🌟

 

We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT’. Embracing Software Craftsmanship values and eXtreme Programming practices, we create well-crafted products for our clients. We partner with large organizations to help modernize their legacy code bases, and work with startups to launch MVPs, scale, or act as extensions of their team to efficiently operationalize their ideas. We love to work with folks who are passionate about creating exceptional software, are continuous learners, and are painstakingly fussy about quality.

 

Our Values 💡

 

Relentless Pursuit of Quality with Pragmatism

Extreme Ownership

Proactive Collaboration

Active Pursuit of Mastery

Effective Feedback

Client Success 

 

What We’re Looking For 👀

 

We’re looking to hire software craftspeople and data engineers: people who are proud of the way they work and the code they write, who believe in and are evangelists of extreme programming principles, and who are high-quality, motivated, and passionate people who make great teams. We strongly believe in being a DevOps organization, where developers own the entire release cycle, including infrastructure technologies in the cloud.

 

 

What You’ll Be Doing 💻

 

Collaborate with teams across the organization, including product managers, data engineers and business leaders to translate requirements into software solutions to process large amounts of data.

  • Develop new ways to ensure ETL and data processes are running efficiently.
  • Write clean, maintainable, and reusable code that adheres to best practices and coding standards.
  • Conduct thorough code reviews and provide constructive feedback to ensure high-quality codebase.
  • Optimize software performance and ensure scalability and reliability.
  • Stay up-to-date with the latest trends and advancements in data processing and ETL development and apply them to enhance our products.
  • Meet with product owners and other stakeholders weekly to discuss priorities and project requirements.
  • Ensure deployment of new code is tested thoroughly and has business sign off from stakeholders as well as senior leadership.
  • Handle all incoming support requests and errors in a timely manner and within the necessary time frame and timezone commitments to the business.

 

Location : Remote

 

Skills you need in order to succeed in this role

 

What you will bring:

 

  • 7+ years of experience with Java 11+ (required), managing and working in Maven projects
  • 2+ years of experience with Python (required)
  • Knowledge and understanding of complex data pipelines utilizing ETL processes (required)
  • 4+ years of experience using relational databases and deep knowledge of SQL with the ability to understand complex data relationships and transformations (required)
  • Knowledge and understanding of Git (required)
  • 3+ years of experience with various GCP technologies:
  • Google Dataflow (Apache Beam SDK), or equivalent Hadoop technologies
  • BigQuery (or an equivalent data warehouse technology: Snowflake, Azure DW, Redshift)
  • Cloud Storage buckets (equivalent to S3)
  • GCloud CLI
  • Experience with Apache Airflow / Google Composer
  • Knowledge and understanding of Docker, Linux, Shell/Bash and virtualization technologies
  • Knowledge and understanding of CI/CD methodologies
  • Ability to understand and build UML diagrams to showcase complex logic
  • Experience with various organization/code tools such as Jira, Confluence and GitHub
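Much of the stack above (Dataflow, BigQuery, SQL-heavy pipelines) reduces to the extract-transform-load pattern over relational data. A toy, stdlib-only sketch of that pattern using `sqlite3`; the table, rows, and currency-normalization rule are invented for illustration, not taken from any real pipeline:

```python
import sqlite3

# Extract: raw rows as they might arrive from a source system,
# with inconsistent currency codes.
rows = [("2024-01-01", "eur", 101.5),
        ("2024-01-01", "EUR", 98.5),
        ("2024-01-02", "usd", 200.0)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, currency TEXT, amount REAL)")

# Transform on load: normalize currency codes to upper case.
conn.executemany("INSERT INTO sales VALUES (?, UPPER(?), ?)", rows)

# Aggregate: daily totals per currency, as a warehouse query would.
daily = conn.execute(
    "SELECT day, currency, SUM(amount) FROM sales "
    "GROUP BY day, currency ORDER BY day"
).fetchall()
```

The same extract/transform/aggregate shape scales up to Beam transforms over BigQuery tables; only the engines change.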

 

Bonus Points for Tech Enthusiasts:

  • Infrastructure as Code technologies (Pulumi, Terraform, CloudFormation)
  • Experience with observability and logging platforms (DataDog)
  • Experience with DBT or similar technologies

Hyderabad
1 - 3 yrs
₹3L - ₹10L / yr
Microsoft Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Office 365

Key Responsibilities:

  • Azure Cloud Sales & Solutioning: Lead Microsoft Azure cloud sales efforts across global regions, delivering solutions for applications, databases, and SAP servers based on customer requirements.
  • Customer Engagement: Act as a trusted advisor for customers, leading them through their cloud transformation by understanding their requirements and recommending suitable cloud solutions.
  • Lead Generation & Cost Optimization: Generate leads independently, provide cost-optimized Azure solutions, and continuously work to maximize value for clients.
  • Sales Certifications: Hold basic Microsoft sales certifications (Foundation & Business Professional).
  • Project Management: Oversee and manage Azure cloud projects, including setting up timelines, guiding technical teams, and communicating progress to customers. Ensure the successful completion of project objectives.
  • Cloud Infrastructure Expertise: Maintain a deep understanding of Azure cloud infrastructure and services, including migrations, disaster recovery (DR), and cloud budgeting.
  • Billing Management: Manage Azure billing processes, including subscription-based invoicing, purchase orders, renewals, license billing, and tracking expiration dates.
  • Microsoft License Sales: Expert in selling Microsoft licenses such as SQL, Windows, and Office 365.
  • Client Collaboration: Schedule meetings with internal teams and clients to align on project requirements and ensure effective communication.
  • Customer Management: Track leads, follow up on calls, and ensure customer satisfaction by resolving issues and optimizing cloud resources. Provide regular updates on Microsoft technologies and programs.
  • Field Sales: Participate in presales meetings and client visits to gather insights and propose cloud solutions.
  • Internal Collaboration: Work closely with various internal departments to achieve project results and meet client expectations.


Qualifications:

  • 1-3+ years of experience selling or consulting with corporate/public sector/enterprise customers on Microsoft Azure cloud.
  • Proficient in Azure cost optimization, cloud infrastructure, and sales of cloud solutions to end customers.
  • Experience in generating leads and tracking sales progress.
  • Project management experience with strong organizational skills.
  • Ability to work collaboratively with internal teams and customers.
  • Strong communication and problem-solving skills.



  • SHIFT: DAY SHIFT
  • WORKING DAYS: MON-SAT
  • LOCATION: HYDERABAD
  • WORK MODEL: WORK FROM THE OFFICE


REQUIRED QUALIFICATIONS:


  • A degree in Computer Science or equivalent - Graduation


BENEFITS FROM THE COMPANY: 


  • Strong opportunities for career growth.
  • Flexible working hours and excellent infrastructure.
  • A team of passionate colleagues around you.


A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
React.js
JavaScript
Redux/Flux
HTML/CSS
Amazon Web Services (AWS)

Roles & Responsibilities:

  • Develop and maintain mobile-responsive web applications using React.
  • Collaborate with our UI/UX designers to translate design wireframes into responsive web applications.
  • Ensure web applications function flawlessly on various web browsers and platforms.
  • Implement performance optimizations to enhance the mobile user experience.
  • Proven experience as a Mobile Responsive Web Developer or a similar role is a must.
  • Knowledge of web performance optimization and browser compatibility.
  • Excellent problem-solving skills and attention to detail.


What are we looking for?

  • 4+ years’ experience as a Front-End developer with hands on experience in React.js & Redux
  • Experience as a UI/UX designer.
  • Familiar with cloud infrastructure (Azure, AWS, or Google Cloud Services).
  • Expert knowledge of CSS, CSS extension languages (Less, Sass), and CSS preprocessor tools.
  • Expert knowledge of HTML5 and its best practices.
  • Proficiency in designing interfaces and building clickable prototypes.
  • Experience with Test Driven Development and Acceptance Test Driven Development.
  • Proficiency using version control tools
  • Effective communication and teamwork skills.
Smartan.ai
Posted by Aadharsh M
Chennai
4 - 8 yrs
₹5L - ₹15L / yr
Python
NumPy
TensorFlow
PyTorch
Google Cloud Platform (GCP)

Role Overview:

We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.


Key Responsibilities:

  • Develop, implement, and optimize machine learning models and algorithms to support product development.
  • Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
  • Collaborate with cross-functional teams to define data requirements and product taxonomy.
  • Design and build scalable data pipelines and systems to support real-time data processing and analysis.
  • Ensure the accuracy and quality of data used for modeling and analytics.
  • Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
  • Implement best practices for data governance, privacy, and security.
  • Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.


Qualifications:

  • Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
  • 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
  • Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
  • Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
  • Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
  • Hands-on experience with data visualization tools and techniques.
  • Strong understanding of statistics, data analysis, and machine learning concepts.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work collaboratively in a fast-paced, dynamic environment.


Preferred Qualifications:

  • Knowledge of microservices architecture and RESTful APIs.
  • Familiarity with Agile development methodologies.
  • Experience in building taxonomy for data products.
  • Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Lean Technologies
Posted by Reshika Mendiratta
Pune
10yrs+
Up to ₹60L / yr (varies)
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

About Lean Technologies

Lean is on a mission to revolutionize the fintech industry by providing developers with a universal API to access their customers' financial accounts across the Middle East. We’re breaking down infrastructure barriers and empowering the growth of the fintech industry. With Sequoia leading our $33 million Series A round, Lean is poised to expand its coverage across the region while continuing to deliver unparalleled value to developers and stakeholders.

Join us and be part of a journey to enable the next generation of financial innovation. We offer competitive salaries, private healthcare, flexible office hours, and meaningful equity stakes to ensure long-term alignment. At Lean, you'll work on solving complex problems, build a lasting legacy, and be part of a diverse, inclusive, and equal opportunity workplace.


About the role:

Are you a highly motivated and experienced software engineer looking to take your career to the next level? Our team at Lean is seeking a talented engineer to help us build the distributed systems that allow our engineering teams to deploy our platform in multiple geographies across various deployment solutions. You will work closely with functional heads across software, QA, and product teams to deliver scalable and customizable release pipelines.


Responsibilities

  • Distributed systems architecture – understand and manage the most complex systems
  • Continual reliability and performance optimization – enhancing observability stack to improve proactive detection and resolution of issues
  • Employing cutting-edge methods and technologies, continually refining existing tools to enhance performance and drive advancements
  • Problem-solving capabilities – troubleshooting complex issues and proactively reducing toil through automation
  • Experience in technical leadership and setting technical direction for engineering projects
  • Collaboration skills – working across teams to drive change and provide guidance
  • Technical expertise – deep skills and the ability to act as a subject matter expert in one or more of: IaC, observability, coding, reliability, debugging, system design
  • Capacity planning – effectively forecasting demand and reacting to changes
  • Analyze and improve efficiency, scalability, and stability of various system resources
  • Incident response – rapidly detecting and resolving critical incidents. Minimizing customer impact through effective collaboration, escalation (including periodic on-call shifts) and postmortems
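One concrete tool behind the reliability and incident-response points above is the error-budget burn rate: how fast current errors consume the budget an SLO allows. A minimal sketch; the SLO values are assumed examples, not Lean's actual targets:

```python
def burn_rate(errors: int, requests: int, slo: float = 0.999) -> float:
    """Ratio of the observed error rate to the error budget allowed by the SLO.

    A value of 1.0 means errors consume the budget exactly as fast as the SLO
    permits; values well above 1.0 typically trigger an alert to the on-call.
    """
    budget = 1.0 - slo           # allowed error fraction
    observed = errors / requests
    return observed / budget
```

For example, 10 errors in 1,000 requests against a 99% SLO gives a burn rate of 1.0, while the same errors against a 99.9% SLO burn the budget ten times as fast.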


Requirements

  • 10+ years of experience in Systems Engineering, DevOps, or SRE roles running large-scale infrastructure, cloud, or web services
  • Strong background in Linux/Unix Administration and networking concepts
  • We work on OCI but would accept candidates with solid GCP/AWS or other cloud providers’ knowledge and experience
  • 3+ years of experience with managing Kubernetes clusters, Helm, Docker
  • Experience in operating CI/CD pipelines that build and deliver services on the cloud and on-premise
  • Experience with CI/CD tools/services such as Jenkins, GitHub Actions, ArgoCD, etc.
  • Experience with configuration management tools either Ansible, Chef, Puppet, or equivalent
  • Infrastructure as Code - Terraform
  • Experience in production environments with both relational and NoSQL databases
  • Coding with one or more of the following: Java, Python, and/or Go


Bonus

  • MultiCloud or Hybrid Cloud experience
  • OCI and GCP


Why Join Us?

At Lean, we value talent, drive, and entrepreneurial spirit. We are constantly on the lookout for individuals who identify with our mission and values, even if they don’t meet every requirement. If you're passionate about solving hard problems and building a legacy, Lean is the right place for you. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, gender, sexual orientation, or disability.

Remote only
5 - 7 yrs
₹9.6L - ₹12L / yr
Systems design
Python
Google Cloud Platform (GCP)

We are seeking a Cloud Architect for a Geocode Service Center Modernization Assessment and Implementation project. The primary objective is to migrate the legacy Geocode Service Center to a cloud-based solution. Initial efforts will focus on leading the assessment and design work, and ultimately on implementing the approved design.


Responsibilities:

  • System Design and Architecture: Design and develop scalable, cloud-based geocoding systems that meet business requirements.
  • Integration: Integrate geocoding services with existing cloud infrastructure and applications.
  • Performance Optimization: Optimize system performance, ensuring high availability, reliability, and efficiency.
  • Security: Implement robust security measures to protect geospatial data and ensure compliance with industry standards.
  • Collaboration: Work closely with data scientists, developers, and other stakeholders to understand requirements and deliver solutions.
  • Innovation: Stay updated with the latest trends and technologies in cloud computing and geospatial analysis to drive innovation.
  • Documentation: Create and maintain comprehensive documentation for system architecture, processes, and configurations.
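As an illustration of the kind of logic a cloud geocoding service wraps, here is a stdlib-only sketch: a cached geocode lookup (the coordinate table is fabricated; a real system would call a provider like Precisely or ESRI) plus a haversine distance between results:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=1024)
def geocode(address: str) -> tuple[float, float]:
    """Stand-in for a real geocoder: returns (lat, lon) for known addresses.

    The lookup table is fabricated for illustration. Caching matters because
    commercial geocoding calls are typically billed per request.
    """
    table = {"Chennai, IN": (13.0827, 80.2707),
             "Hyderabad, IN": (17.3850, 78.4867)}
    return table[address]

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))
```

In a cloud deployment the in-process cache would usually be replaced by a shared cache (e.g., a managed Redis), but the cost-saving idea is the same.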


Requirements:

  • Educational Background: Bachelor’s or Master’s degree in Computer Science, Information Technology, Geography, or a related field.
  • Technical Proficiency: Extensive experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and geocoding tools like Precisely, ESRI etc.
  • Programming Skills: Proficiency in programming languages such as Python, Java, or C#.
  • Analytical Skills: Strong analytical and problem-solving skills to design efficient geocoding systems.
  • Experience: Proven experience in designing and implementing cloud-based solutions, preferably with a focus on geospatial data.
  • Communication Skills: Excellent communication and collaboration skills to work effectively with cross-functional teams.
  • Certifications: Relevant certifications in cloud computing (e.g., AWS Certified Solutions Architect) and geospatial technologies are a plus.


Benefits:

  • Work Location: Remote
  • 5-day work week



You can apply directly through this link: https://zrec.in/il0hc?source=CareerSite

Explore our career page for more such jobs: careers.infraveo.com

A leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage. We are the partner of choice for enterprises on their digital transformation journey. Our teams offer solutions and services at the intersection of Advanced Data, Analytics, and AI.

Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
8 - 12 yrs
₹20L - ₹30L / yr
Python
FastAPI
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Skills: Python, FastAPI, AWS/GCP/Azure

Location - Bangalore / Mangalore (Hybrid)

Notice period: Immediate to 20 days


• Experience building Python-based, utility-scale enterprise APIs with QoS/SLA-based specifications, built upon cloud APIs from GCP, AWS, and Azure

• Exposure to multi-modal (text, audio, video) development in synchronous and batch modes for high-volume use cases, leveraging queuing, pooling, and enterprise scaling patterns

• Solid understanding of the API life cycle, including versioning (e.g., parallel deployment of multiple versions) and exception management

• Working experience (development and/or troubleshooting) with enterprise-scale AWS, and CI/CD leveraging GitHub Actions-based workflows

• Solid knowledge of developing and updating enterprise CloudFormation templates for Python-centric code assets, along with quality/security tooling

• Ability to design and support tracing/monitoring capability (X-Ray, AWS Distro for OpenTelemetry) for Fargate services

• Responsible and able to communicate requirements
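For candidates unfamiliar with the queuing/throttling patterns this role references, a QoS limit on a high-volume API can be pictured as a token bucket. The sketch below is a minimal pure-Python illustration (the rate and capacity values are arbitrary, not from any posting):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: allows `rate` requests/sec with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens accrued since the last call, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A fake clock makes the burst-then-throttle behaviour deterministic to observe.
t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(3)]   # two allowed, third rejected
t[0] += 1.0                                   # one second later: one token refilled
later = bucket.allow()
```

Production services would typically enforce this at a gateway or via a shared store rather than in-process, but the accounting is the same.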

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Sukanya Mohan
Posted by Sukanya Mohan
Chennai
4 - 10 yrs
Best in industry
skill iconKubernetes
skill iconDocker
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Wissen Technology is hiring for Devops engineer


Required:


-4 to 10 years of relevant experience in DevOps

-Must have hands-on experience with AWS, Kubernetes, and CI/CD pipelines

-Good to have exposure to GitHub or GitLab

-Open to work from Chennai

-Work mode will be Hybrid


Company profile:


Company Name : Wissen Technology

Group of companies in India : Wissen Technology & Wissen Infotech

Work Location - Chennai

Website : www.wissen.com

Wissen Thought leadership : https://lnkd.in/gvH6VBaU

LinkedIn: https://lnkd.in/gnK-vXjF

Read more
Someshwara Software
Bengaluru (Bangalore)
2 - 3 yrs
₹4L - ₹7L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more

Position Overview: We are seeking a talented and experienced Cloud Engineer specializing in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.


Responsibilities:

• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.

• Configure and manage EC2 instances to meet application requirements.

• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.

• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.

• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.

• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.

• Implement and monitor S3 storage solutions for secure and scalable data storage

• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.

• Configure Route 53 for domain management, DNS routing, and failover configurations.

• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.

• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.

• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
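As background on the Auto Scaling responsibility above: target-tracking policies keep a metric near a target by resizing the fleet roughly in proportion to the metric's deviation. The sketch below is a simplified model of that math (real AWS target tracking additionally applies cooldowns, alarm evaluation periods, and instance warm-up):

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, min_size: int, max_size: int) -> int:
    """Proportional resize used by target-tracking style policies:
    new_capacity ~= current * (metric / target), clamped to [min_size, max_size]."""
    if metric_value <= 0:
        return max(min_size, min(current_capacity, max_size))
    raw = current_capacity * (metric_value / target_value)
    # Round up so we never undershoot the target on scale-out.
    return max(min_size, min(math.ceil(raw), max_size))

# 4 instances at 90% CPU against a 60% target -> scale out.
scale_out = desired_capacity(4, metric_value=90.0, target_value=60.0, min_size=2, max_size=10)
# 4 instances at 30% CPU against a 60% target -> scale in.
scale_in = desired_capacity(4, metric_value=30.0, target_value=60.0, min_size=2, max_size=10)
```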



Qualifications:

• Bachelor's degree in computer science, Information Technology, or a related field.

• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.

• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.

• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.

• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.

• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.

• Strong communication skills with the ability to collaborate effectively with cross-functional teams.

• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.


Additional Information:

• We value creativity, innovation, and a proactive approach to problem-solving.

• We offer a collaborative and supportive work environment where your ideas and contributions are valued.

  • Opportunities for professional growth and development.

Someshwara Software Pvt Ltd is an equal opportunity employer.


We celebrate diversity and are dedicated to creating an inclusive environment for all employees.


Read more
Celeris Pay
Celeris Pay
Posted by Celeris Pay
Delhi
2 - 5 yrs
₹5L - ₹12L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Job Description

We are seeking a talented DevOps Engineer to join our dynamic team. The ideal candidate will have a passion for building and maintaining cloud infrastructure while ensuring the reliability and efficiency of our applications. You will be responsible for deploying and maintaining cloud environments, enhancing CI/CD pipelines, and ensuring optimal performance through proactive monitoring and troubleshooting.


Roles and Responsibilities:

  • Cloud Infrastructure: Deploy and maintain cloud infrastructure on Microsoft Azure or AWS, ensuring scalability and reliability.
  • CI/CD Pipeline Enhancement: Continuously improve CI/CD pipelines and build robust development and production environments.
  • Application Deployment: Manage application deployments, ensuring high reliability and minimal downtime.
  • Monitoring: Monitor infrastructure health and perform application log analysis to identify and resolve issues proactively.
  • Incident Management: Troubleshoot and debug incidents, collaborating closely with development teams to implement effective solutions.
  • Infrastructure as Code: Enhance Ansible roles and Terraform modules, maintaining best practices for Infrastructure as Code (IaC).
  • Tool Development: Write tools and utilities to streamline and improve infrastructure operations.
  • SDLC Practices: Establish and uphold industry-standard Software Development Life Cycle (SDLC) practices with a strong focus on quality.
  • On-call Support: Be available 24/7 for on-call incident management for production environments.


Requirements:

  • Cloud Experience: Hands-on experience deploying and provisioning virtual machines on Microsoft Azure or Amazon AWS.
  • Linux Administration: Proficient with Linux systems and basic system administration tasks.
  • Networking Knowledge: Working knowledge of network fundamentals (Ethernet, TCP/IP, WAF, DNS, etc.).
  • Scripting Skills: Proficient in BASH and at least one high-level scripting language (Python, Ruby, Perl).
  • Tools Proficiency: Familiarity with tools such as Git, Nagios, Snort, and OpenVPN.
  • Containerization: Strong experience with Docker and Kubernetes is mandatory.
  • Communication Skills: Excellent interpersonal communication skills, with the ability to engage with peers, customers, vendors, and partners across all levels of the organization.
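The log-analysis duty listed under Monitoring often starts with something as simple as aggregating error counts per service from structured log lines. A minimal stand-alone sketch (the log format here is invented for illustration):

```python
from collections import Counter

def error_counts(log_lines):
    """Count ERROR entries per service from lines like
    '2024-05-01T10:00:00 payments ERROR timeout calling upstream'."""
    counts = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=3)
        if len(parts) >= 3 and parts[2] == "ERROR":
            counts[parts[1]] += 1
    return counts

logs = [
    "2024-05-01T10:00:00 payments ERROR timeout calling upstream",
    "2024-05-01T10:00:01 payments INFO request ok",
    "2024-05-01T10:00:02 auth ERROR invalid token signature",
    "2024-05-01T10:00:03 payments ERROR connection reset",
]
summary = error_counts(logs)
```

In practice the same aggregation is usually delegated to a log platform (CloudWatch Logs Insights, Loki, ELK), but writing the one-off script version is a routine part of incident debugging.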



Read more
HighLevel Inc.

at HighLevel Inc.

1 video
31 recruiters
Rini  Shah
Posted by Rini Shah
Remote only
4 - 8 yrs
Best in industry
skill iconData Science
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
recommendation algorithm
MLOps
+5 more

Who We Are:

HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.


Our Website - https://www.gohighlevel.com/

YouTube Channel- https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g

Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/


Our Customers:

HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K+ businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.


Scale at HighLevel:

We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.




About the role

We are looking for a senior AI engineer for the platform team. The ideal candidate will be responsible for deriving key insights from huge sets of data, building models around those, and taking them to production.

● Implementation

- Analyze data to gain new insights; develop custom data models and algorithms to apply to distributed data sets at scale

- Build continuous integration, test-driven development, and production deployment frameworks

● Architecture: Design the architecture with the Data and DevOps engineers

● Ownership: Take ownership of the accuracy of the models and coordinate with stakeholders to keep moving forward

● Releases: Take ownership of releases and pre-plan according to the needs

● Quality: Maintain high standards of code quality through regular design and code reviews.


Qualifications

  • Extensive hands-on experience in Python/R is a must.
  • 3 years of AI engineering experience
  • Proficiency in ML/DL frameworks and tools (e.g., Pandas, NumPy, scikit-learn, PyTorch, Lightning, Hugging Face)
  • Strong command of the low-level operations involved in building architectures for ensemble models, NLP, and CV (e.g., XGB, Transformers, CNNs, diffusion models)
  • Experience with end-to-end system design: data analysis, distributed training, model optimisation, and evaluation systems for large-scale training & prediction.
  • Experience working with huge datasets (ideally in terabytes) would be a plus.
  • Experience with frameworks like Apache Spark (using MLlib or PySpark), Dask, etc.
  • Practical experience in API development (e.g., Flask, FastAPI).
  • Experience with MLOps principles (scalable development & deployment of complex data science workflows) and associated tools, e.g., MLflow, Kubeflow, ONNX
  • Bachelor's degree or equivalent experience in Engineering or a related field of study
  • Strong people, communication, and problem-solving skills
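Owning "the accuracy of the models", as this role describes, starts with evaluation plumbing. Precision and recall can be computed from raw predictions in a few lines; the pure-Python sketch below assumes no framework and uses made-up labels for illustration:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN) for a chosen positive label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 2 true positives, 1 false positive (index 1), 1 false negative (index 3).
p, r = precision_recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
```

In production this lives inside an evaluation pipeline (scikit-learn's `precision_score`/`recall_score` or an MLflow metric), but the definitions are exactly these counts.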


What to Expect when you Apply

● Exploratory Call

● Technical Round I/II

● Assignment

● Cultural Fitment Round


EEO Statement:

At HighLevel, we value diversity. In fact, we understand it makes our organization stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Read more
NASDAQ listed, Service Provider IT Company

NASDAQ listed, Service Provider IT Company

Agency job
via CaptiveAide Advisory Pvt Ltd by Abhishek Dhuria
Bengaluru (Bangalore), Hyderabad
12 - 15 yrs
₹45L - ₹48L / yr
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Windows Azure
Cloud Computing
Microservices
+11 more

Job Summary:

As a Cloud Architect, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.


Key Responsibilities:

  • Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
  • Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
  • Develop and maintain cloud-native solutions leveraging services from various cloud providers.
  • Architect and deploy microservices using REST and GraphQL to support our application development needs.
  • Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
  • Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
  • Implement robust security practices and policies to protect cloud environments and data.
  • Design and implement data management strategies, including data governance, data integration, and data security.
  • Stay up-to-date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
  • Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
  • Optimize cost and performance across different cloud environments.


Qualifications/ Experience & Skills Required:

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • Experience: 10 - 15 Years
  • Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
  • Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
  • Strong knowledge of cloud-native solutions and microservices architecture.
  • Proficiency in using GraphQL for designing and implementing APIs.
  • Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
  • Experience with DevOps practices and tools, including CI/CD pipelines.
  • Excellent problem-solving skills and ability to troubleshoot complex issues.
  • Strong communication and collaboration skills, with the ability to work effectively in a team environment.
  • Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
  • Experience with data management, including data governance, data integration, and data security.


Preferred Skills:

  • Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
  • Experience with containerization technologies (Docker, Kubernetes).
  • Familiarity with cloud cost management and optimization tools.
Read more
the forefront of innovation in the digital video industry

the forefront of innovation in the digital video industry

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
7 - 10 yrs
₹15L - ₹30L / yr
skill iconHTML/CSS
skill iconJavascript
skill iconAngular (2+)
skill iconAngularJS (1.x)
ASP.NET
+11 more

Responsibilities:

  • Work with development teams and product managers to ideate software solutions
  • Design client-side and server-side architecture
  • Creating a well-informed cloud strategy and managing the adaptation process
  • Evaluating cloud applications, hardware, and software
  • Develop and manage well-functioning databases and applications Write effective APIs
  • Participate in the entire application lifecycle, focusing on coding and debugging
  • Write clean code to develop, maintain and manage functional web applications
  • Get feedback from, and build solutions for, users and customers
  • Participate in requirements, design, and code reviews
  • Engage with customers to understand and solve their issues
  • Collaborate with remote team on implementing new requirements and solving customer problems
  • Focus on quality of deliverables with high accountability and commitment to program objectives

Required Skills:

  • 7–10 years of software development experience
  • Experience using Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or other major cloud computing services.
  • Strong skills in containers, Kubernetes, and Helm
  • Proficiency in C#, .NET, PHP/Java technologies, with an acumen for code analysis, debugging, and problem solving
  • Strong skills in database design (PostgreSQL or MySQL)
  • Experience with caching and message queues
  • Experience in REST API framework design
  • Strong focus on high-quality and maintainable code
  • Understanding of multithreading, memory management, and object-oriented programming
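On the caching requirement above: in-process memoization is the simplest tier of a caching strategy, before reaching for Redis or Memcached. Sketched here in Python for brevity (the posting's stack is C#/.NET/Java, but the idea is identical), with a counter to make the cache hit observable:

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=128)
def fetch_profile(user_id: int) -> dict:
    """Stand-in for an expensive DB/API lookup; repeated calls hit the cache."""
    calls["count"] += 1
    return {"id": user_id, "name": f"user-{user_id}"}

first = fetch_profile(42)
second = fetch_profile(42)   # served from the cache; the body does not run again
info = fetch_profile.cache_info()
```

The same pattern scales out by swapping the decorator's in-memory store for a shared cache with an explicit TTL and invalidation policy.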

Preferred skills:

  • Experience in working with Linux OS
  • Experience in Core Java programming
  • Experience in working with JSP/Servlets, Struts, Spring / Spring Boot, Hibernate
  • Experience in working with web technologies HTML, CSS
  • Knowledge of development tooling, particularly Git, Stash, Jenkins, and JIRA.
  • Domain Knowledge of Video, Audio Codecs
Read more
Fynd

at Fynd

3 recruiters
Akshata Kadam
Posted by Akshata Kadam
Mumbai
5 - 8 yrs
Best in industry
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+6 more

Fynd is India’s largest omnichannel platform and multi-platform tech company with expertise in retail tech and products in AI, ML, big data ops, gaming+crypto, image editing and the learning space. Founded in 2012 by three IIT Bombay alumni: Farooq Adam, Harsh Shah and Sreeraman MG. We are headquartered in Mumbai and have 1000+ brands under management, more than 10k stores, and service 23k+ pin codes.


We're looking for an SDE I/ SDE II- DevSecOps to join our Engineering Team. The team builds products for 10M+ Fynd users and internal teams. Our team consists of generalist engineers who work on building modern websites (SPA & Isomorphic), mobile apps for Android & iOS, REST APIs and servers, internal tools, and infrastructure for all our users.


What will you do at Fynd?

  • Build a culture around security engineering at Fynd
  • Ensure that a healthy security posture is maintained by continuously assessing/monitoring the perimeter as well as the internal security posture.
  • Identify, integrate, monitor, and improve InfoSec controls by understanding business processes.
  • Drive a DevSecOps culture in the organization by implementing shift left security culture.
  • Conduct security reviews, auditing, penetration testing, risk assessments, vulnerability assessments, threat modeling.
  • Install, configure, manage, and maintain mission-critical enterprise applications such as AV, patching, SIEM, DLP, log management, and other technical controls; troubleshoot security systems and related issues
  • Good understanding of working with CSPM (Cloud Security Posture Management) tooling
  • Good understanding of the various services of AWS & GCP, including DNS
  • Improve the security posture of Cloud, applications, Kafka, databases, and Kubernetes via CI/CD and regular gap assessments; provide support in the detection and mitigation of cyber-security vulnerabilities and incidents for Cloud
  • Run security automation tools for periodic scans - SAST, DAST, infrastructure scanning, compliance checks
  • Adhere to OWASP guidelines and bring the OWASP maturity model to the organisation level.
  • Strong understanding of network concepts including TCP/IP, HTTP and TLS, DDoS detection/prevention, and network and host anomaly detection through both automated (NIDS/HIDS) and manual means.
  • A good knack for automating infrastructure security as much as possible


Some specific requirements

  • A minimum of 3-4 years of professional experience in monitoring and improving DevSecOps tools and processes
  • Extensive knowledge of assurance tools such as Fortify, OWASP ZAP, SonarQube, and open-source automation tools, and their integration into CI/CD cycles.
  • Understanding of Zero Trust policy and its implementation.
  • Ability to identify security weaknesses across multiple programming languages like Python, Node.js, Java, Go, JavaScript, HTML, etc.
  • Participate in incident handling and other related duties to support the information security function.
  • Ability to drive security automation and DevSecOps within the engineering life cycle, as well as vulnerability/bug remediation
  • Good to have audit experience across compliance certifications like ISO 27001/ISMS/PCI DSS/SoC 2
  • Experience with Kubernetes infrastructure and cloud deployment technologies - AWS, GCP
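A concrete flavour of the "shift left" automation this role drives: a pre-commit-style secret scan can be a handful of regexes run over changed files. The sketch below is deliberately tiny; real scanners such as Gitleaks or truffleHog ship hundreds of vetted rules, and the two patterns here are illustrative only:

```python
import re

# Illustrative patterns only; production scanners use far more (and stricter) rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # the shape of an AWS access key ID
    re.compile(r'''(?i)api[_-]?key\s*=\s*["'][^"']{16,}["']'''),
]

def find_secrets(text: str):
    """Return the list of suspicious matches found in a blob of text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

clean = find_secrets("timeout = 30\nretries = 5\n")
dirty = find_secrets('aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key = "0123456789abcdef0123"\n')
```

Wired into a pre-commit hook or a CI job, the same function blocks a merge before the credential ever reaches the remote, which is the cheapest point in the lifecycle to catch it.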
Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Remote, Pune
2 - 6 yrs
₹8L - ₹25L / yr
SQL Azure
databricks
skill iconPython
SQL
ETL
+9 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.


We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.


We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lakehouse & the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.


Skills Required:


  • Experience in the manufacturing industry (metal industry is a plus)
  • 4+ years of experience as a Data Engineer
  • Experience in data cleaning & structuring and data manipulation
  • Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
  • Experience in SQL and data structures
  • Knowledge of big data technologies such as Spark, Flink, and Hadoop, as well as NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
  • Proficient in data management and data governance
  • Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork abilities.
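The ETL responsibilities above reduce, at their core, to extract–transform–load steps. A minimal pure-Python sketch over an in-memory CSV (the column names and sample readings are invented for illustration; a real pipeline would use Spark, Airflow, or Azure Data Factory):

```python
import csv
import io

RAW_CSV = """machine_id,temp_c,ts
M1,71.5,2024-05-01T10:00
M2,,2024-05-01T10:00
M1,69.0,2024-05-01T10:05
"""

def extract(blob: str):
    """Parse raw CSV into dict rows (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(blob)))

def transform(rows):
    """Drop rows with missing readings and cast types (the 'transform' step)."""
    return [
        {"machine_id": r["machine_id"], "temp_c": float(r["temp_c"]), "ts": r["ts"]}
        for r in rows
        if r["temp_c"]  # filter out empty sensor readings
    ]

def load(rows, sink):
    """Append clean rows to a destination (a plain list standing in for a table)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW_CSV)), warehouse)
```

Production pipelines add what the toy version omits: schema validation, idempotent loads, incremental watermarks, and monitoring, which is where most of the engineering effort in this role actually goes.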


Nice To Have:

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud infrastructure.
  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).


Benefits and Perks:

  • A culture that fosters innovation, creativity, continuous learning, and resilience
  • Progressive leave policy promoting work-life balance
  • Mentorship opportunities with highly qualified internal resources and industry-driven programs
  • Multicultural peer groups and supportive workplace policies
  • Annual workcation program allowing you to work from various scenic locations
  • Experience the unique environment of a dynamic start-up


Why should you join TVARIT ?


Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.


If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

Read more
Smartavya

Smartavya

Agency job
via Pluginlive by Harsha Saggi
Mumbai
10 - 18 yrs
₹35L - ₹40L / yr
Hadoop
Architecture
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
PySpark
+13 more
Architectural Leadership:

  • Design and architect robust, scalable, and high-performance Hadoop solutions.
  • Define and implement data architecture strategies, standards, and processes.
  • Collaborate with senior leadership to align data strategies with business goals.

Technical Expertise:

  • Develop and maintain complex data processing systems using Hadoop and its ecosystem (HDFS, YARN, MapReduce, Hive, HBase, Pig, etc.).
  • Ensure optimal performance and scalability of Hadoop clusters.
  • Oversee the integration of Hadoop solutions with existing data systems and third-party applications.

Strategic Planning:

  • Develop long-term plans for data architecture, considering emerging technologies and future trends.
  • Evaluate and recommend new technologies and tools to enhance the Hadoop ecosystem.
  • Lead the adoption of big data best practices and methodologies.

Team Leadership and Collaboration:

  • Mentor and guide data engineers and developers, fostering a culture of continuous improvement.
  • Work closely with data scientists, analysts, and other stakeholders to understand requirements and deliver high-quality solutions.
  • Ensure effective communication and collaboration across all teams involved in data projects.

Project Management:

  • Lead large-scale data projects from inception to completion, ensuring timely delivery and high quality.
  • Manage project resources, budgets, and timelines effectively.
  • Monitor project progress and address any issues or risks promptly.

Data Governance and Security:

  • Implement robust data governance policies and procedures to ensure data quality and compliance.
  • Ensure data security and privacy by implementing appropriate measures and controls.
  • Conduct regular audits and reviews of data systems to ensure compliance with industry standards and regulations.
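The MapReduce model named in the Hadoop ecosystem above is easiest to grasp through the canonical word count, sketched here in plain Python. Hadoop runs the same three phases (map, shuffle/sort, reduce) but distributes them across a cluster, with HDFS supplying the input splits:

```python
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs — what each mapper does per input split."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Group values by key — the framework's shuffle/sort between map and reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum each key's values — what each reducer does."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big clusters", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

Hive and Pig, also listed above, compile higher-level queries down to exactly these phases (or to Spark/Tez DAGs on modern clusters).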
Read more
Cargill Business Services
Vignesh R
Posted by Vignesh R
Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+2 more

Job Purpose and Impact

The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.

Key Accountabilities

  • Collaborate with internal and external partners to understand and evaluate business requirements.
  • Implement modern engineering practices to ensure product quality.
  • Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
  • Write well-designed, testable and efficient code using full-stack engineering capability.
  • Integrate software components into a fully functional software system.
  • Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
  • Proficiency in at least one configuration management or orchestration tool, such as Ansible.
  • Experience with cloud monitoring and logging services.

Qualifications

Minimum Qualifications

  • Bachelor's degree in a related field or equivalent experience
  • Knowledge of public cloud services & application programming interfaces
  • Working experience with continuous integration and delivery practices

Preferred Qualifications

  • 3-5 years of relevant experience, whether in IT, IS, or software development
  • Experience in:
  • Code repositories such as Git
  • Scripting languages (Python & PowerShell)
  • Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
  • Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
  • Databases such as Postgres, SQL, Elastic
Read more
Hexpress Healthcare
Vadodara
5 - 7 yrs
₹10L - ₹15L / yr
skill iconGit
skill iconKubernetes
Google Cloud Platform (GCP)
CI/CD
Red Hat Linux

About Us

Hexpress Healthcare Softech is a leading IT healthcare services provider based in Vadodara. We belong to the same group as Hexpress Healthcare Limited, a leading online healthcare provider based in London. We supply patients with health solutions and medical advice across multiple countries worldwide, continuously expanding into new markets. We were the first to provide an online consultation and prescription service, allowing our customers to safely order and receive medication from their homes. The Hexpress Healthcare group incorporates trusted brands such as HealthExpress, euroClinix and 121doc.

 

Hexpress Healthcare has been at the forefront of innovation in the e-health sector since our launch. Patient safety has always been one of our top priorities to guarantee service excellence. We combine excellent industry expertise with technical know-how and work closely with several medical and data protection regulators.

 

We have always set ourselves apart through the quality of our service. All our staff are extensively trained in best practices, ensuring effectiveness, efficiency and professionalism when dealing with sensitive information. Hexpress is proud to work with a highly professional and fully trained team.


Job Description

Summary:

We are seeking a qualified Cloud Server Administrator to join our team and play a vital role in managing and maintaining our cloud infrastructure. As a cloud server administrator, you will be responsible for ensuring the optimal performance, security, and efficiency of our cloud-based servers.

You will work closely with other IT professionals, developers, and operations teams to ensure seamless integration of cloud services within the organization. The candidate is expected to demonstrate proficiency in leveraging AI technologies to enhance the efficiency and effectiveness of job performance.


Key Responsibilities

  • Installation, configuration, and maintenance of RHEL/Rocky (primarily) and Ubuntu cloud systems
  • Managing Google Cloud VMs, storage, databases and other Google Cloud Platform services
  • OS updates, security hardening, system automation, network management and database management 
  • Application deployment and management of GitLab-server-based development environments with CI/CD
  • Perform regular backups and disaster recovery procedures
  • Manage and maintain server performance, security, and functionality
  • Manage user access and permissions for cloud services
  • Monitor system activity to identify and troubleshoot server issues
  • Monitor cloud resource utilization and cost optimization
  • Perform regular security audits and updates to safeguard against cyber threats
  • Document server configurations and procedures for future reference
  • Stay up-to-date on the latest server technologies and best practices
  • Collaborate with IT support teams and other departments to resolve technical issues


Requirements


Technical Skills

  • B.E./ B.Tech./ M.E./ M.Tech./ MCA/ Msc.IT or related field
  • Minimum of 5 years of experience in cloud administration or a similar IT role
  • Experience with one or more major cloud platforms (e.g. AWS, Microsoft Azure, Google Cloud Platform)
  • Strong understanding of server operating systems (e.g. RHEL/RockyLinux, Ubuntu)
  • Excellent knowledge of network security principles and best practices
  • Proficiency in scripting languages is required for automation and integration tasks: Bash (must); Python, Go, C++, or PHP a bonus
  • Proven ability to troubleshoot and resolve complex server issues
  • Strong analytical and problem-solving skills
  • Excellent communication and interpersonal skills


Nice to Have Skills


  • Experience with containerization technologies (e.g. Docker, Kubernetes)
  • Knowledge of DevOps principles and practice
  • Certifications in cloud platforms (e.g. AWS, GCP, Azure)
  • Understanding of the Git source code management tool
  • Experience with Agile methodologies


Benefits

  • Service recognition awards.
  • Market-leading salary packages.
  • Maternity & Paternity Benefits
  • Medical Insurance Benefits


Read more
FiftyFive Technologies Pvt Ltd
Tanvi Bhatia
Posted by Tanvi Bhatia
Remote only
4 - 8 yrs
₹7L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Job Title :- Senior DevOps Engineer

 

Exp :- 4+ years

Location :- Pan India, Jaipur, Indore, Gurgaon

Position :- 2


Job Summary:

 

We are seeking a skilled and motivated DevOps Engineer with 4+ years of experience to join our dynamic team. The ideal candidate will have a strong background in AWS cloud platforms and a passion for continuous integration, continuous delivery, and automation. As a DevOps Engineer, you will play a critical role in managing our cloud infrastructure, optimizing deployment pipelines, and ensuring the reliability and scalability of our systems.

 

Key Responsibilities:

  • Cloud Infrastructure Management: Design, implement, and manage scalable, secure, and highly available AWS cloud infrastructure.
  • Monitor and optimize cloud resources to ensure efficient utilization and cost management.
  • CI/CD Pipelines: Develop and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or AWS CodePipeline. Automate build, test, and deployment processes to improve efficiency and reduce errors.
  • Configuration Management: Utilize configuration management tools like Ansible, Chef, or Puppet to automate provisioning and configuration of infrastructure. Ensure consistency and repeatability of environments across development, staging, and production.
  • Monitoring and Logging: Implement and maintain monitoring and logging solutions to ensure the health and performance of applications and infrastructure. Use tools such as CloudWatch, Prometheus, Grafana, or ELK stack to proactively identify and resolve issues.
  • Security and Compliance: Implement best practices for cloud security, including IAM policies, security groups, and data encryption. Ensure compliance with industry standards and regulations, conducting regular security audits and vulnerability assessments.
  • Collaboration and Support: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of applications. Provide support for infrastructure-related issues and incidents, participating in on-call rotations as needed.
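The monitoring responsibility above often reduces to evaluating metric samples against alert thresholds before paging anyone. A hypothetical sketch of that logic (the metric names and thresholds are illustrative, not tied to any particular CloudWatch or Prometheus setup):

```python
def breached_alerts(samples: dict[str, list[float]],
                    thresholds: dict[str, float],
                    min_consecutive: int = 3) -> list[str]:
    """Return metric names whose last `min_consecutive` samples all
    exceed their threshold -- a simple way to avoid paging on a
    one-off spike rather than a sustained breach."""
    alerts = []
    for metric, values in samples.items():
        limit = thresholds.get(metric)
        if limit is None or len(values) < min_consecutive:
            continue  # unmonitored metric, or not enough data yet
        if all(v > limit for v in values[-min_consecutive:]):
            alerts.append(metric)
    return alerts
```

Real alerting stacks express the same idea declaratively (e.g. a Prometheus `for:` clause), but the sustained-breach check is the common core.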

 

Qualifications:

 

  • Education: Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
  • Experience: Minimum of 4 years of experience in a DevOps role with a focus on AWS cloud services.
  • Technical Skills: 
  • Proficiency in AWS services such as EC2, S3, RDS, Lambda, VPC, CloudFormation, and IAM.
  • Experience with containerization and orchestration tools like Docker and Kubernetes.
  • Strong scripting skills in languages such as Python, Bash, or Ruby.
  • Familiarity with infrastructure as code (IaC) tools like Terraform or CloudFormation.
  • Knowledge of version control systems such as Git.
  • Soft Skills:
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration abilities.
  • Ability to work in a fast-paced, dynamic environment and handle multiple tasks simultaneously.

 

Preferred Qualifications:

  • AWS Certified DevOps Engineer or similar certifications.
  • Experience with other cloud platforms (e.g., Azure, GCP) is a plus.
  • Knowledge of agile methodologies and experience working in agile teams.


Read more
Bengaluru (Bangalore)
15 - 25 yrs
₹3L - ₹20L / yr
Channel Sales
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Windows Azure
SaaS
+1 more

Job Description - Manager Sales

Min 15 years experience,

Should have experience in sales of the Cloud IT SaaS product portfolio which Savex deals with,

Team management experience, leading a cloud business, including teams

Sales manager - Cloud Solutions

Reporting to Sr Management

Good personality

Distribution background

Keen on Channel partners

Good database of OEMs and channel partners.

Age group - 35 to 45yrs

Male Candidate

Good communication

B2B Channel Sales

Location - Bangalore


If interested reply with cv and below details


Total exp -

Current ctc - 

Exp ctc - 

Np -

Current location - 

Qualification - 

Total exp Channel Sales -

What are the Cloud IT products, you have done sales for? 

What is the Annual revenue generated through Sales ?

Read more
MangoApps

at MangoApps

29 recruiters
Dhhruval Modi
Posted by Dhhruval Modi
Pune
5 - 9 yrs
₹18L - ₹35L / yr
Microsoft Windows Azure
Google Cloud Platform (GCP)
Terraform
CloudWatch
Linux/Unix

About the job


MangoApps builds enterprise products that make employees at organizations across the globe more effective and productive in their day-to-day work. We seek tech pros, great communicators, collaborators, and efficient team players for this role.


Job Description:


Experience: 5+yrs (Relevant experience as a SRE)


Open positions: 2


Job Responsibilities as a SRE


  • Must have very strong experience in Linux (Ubuntu) administration
  • Strong in network troubleshooting
  • Experienced in handling and diagnosing the root cause of compute and database outages
  • Strong experience required with cloud platforms, specifically Azure or GCP (proficiency in at least one is mandatory)
  • Must have very strong experience in designing, implementing, and maintaining highly available and scalable systems
  • Must have expertise in CloudWatch or similar log systems and troubleshooting using them
  • Proficiency in scripting and programming languages such as Python, Go, or Bash is essential
  • Familiarity with configuration management tools such as Ansible, Puppet, or Chef is required
  • Must possess knowledge of database/SQL optimization and performance tuning.
  • Respond promptly to and resolve incidents to minimize downtime
  • Implement and manage infrastructure using IaC tools like Terraform, Ansible, or CloudFormation
  • Excellent problem-solving skills with a proactive approach to identifying and resolving issues are essential.
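Diagnosing the root cause of the compute and database outages mentioned above typically starts with grouping error-log lines to see what spiked. A hypothetical first-pass triage sketch (the log format is an assumption; real formats vary by system):

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> <LEVEL> <message>"
LOG_PATTERN = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<message>.*)$")

def top_errors(log_lines: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Count ERROR-level messages, masking digits so near-identical
    lines ("timeout after 31s" vs "timeout after 32s") group together,
    and return the `n` most frequent."""
    counts = Counter()
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("level") == "ERROR":
            counts[re.sub(r"\d+", "N", m.group("message"))] += 1
    return counts.most_common(n)
```

The same grouping idea is what log systems like CloudWatch Logs Insights or the ELK stack do at scale with queries instead of scripts.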


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Seema Srivastava
Posted by Seema Srivastava
Mumbai, Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
Microservices
+4 more

Experience: 5+ Years


• Experience in Core Java, Spring Boot

• Experience in microservices and angular

• Extensive experience in developing enterprise-scale systems for global organization. Should possess good architectural knowledge and be aware of enterprise application design patterns.

• Should be able to analyze, design, develop and test complex, low-latency client-facing applications.

• Good development experience with RDBMS in SQL Server, Postgres, Oracle or DB2

• Good knowledge of multi-threading

• Basic working knowledge of Unix/Linux

• Excellent problem solving and coding skills in Java

• Strong interpersonal, communication and analytical skills.

• Should be able to express their design ideas and thoughts

Read more
Elysium Labs Pvt Ltd
Avani Mathur
Posted by Avani Mathur
Remote only
0 - 2 yrs
₹10L - ₹12L / yr
skill iconPHP
SQL
SQL server
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

We are a technology company operating in the media space. We are the pioneers of robot journalism in India. We use a mix of AI-generated and human-edited content across media formats, be it audio, video, or text.

Our key products include India’s first explanatory journalism portal (NewsBytes), a content platform for developers (DevBytes), and a SaaS platform for content creators (YANTRA).

Our B2C media products are consumed by more than 50 million users in a month, while our AI-driven B2B content engine helps companies create text-based content at scale.

The company was started by alumni of IIT, IIM Ahmedabad, and Cornell University. It has raised institutional financing from a well-renowned media-tech VC and a Germany-based media conglomerate.

We are hiring a talented DevOps Engineer with 3+ years of experience to join our team. If you're excited to be part of a winning team, we are a great place to grow your career.

Responsibilities

●       Handle and optimise cloud (servers and CDN)

●       Build monitoring tools for the infrastructure

●       Perform a granular level of analysis and optimise usage

●       Help migrate from a single cloud environment to multi-cloud strategy

●       Monitor threats and explore building a protection layer

●       Develop scripts to automate certain aspects of the deployment process
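The granular usage analysis mentioned above usually starts from CDN or web-server access logs. A hypothetical sketch (the log field order is an assumption for illustration):

```python
from collections import defaultdict

def bytes_by_path(access_log: list[str]) -> dict[str, int]:
    """Aggregate bytes served per URL path from access-log lines of the
    (assumed) form "<ip> <path> <status> <bytes>". Paths that dominate
    bandwidth are the first candidates for caching or compression."""
    totals = defaultdict(int)
    for line in access_log:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip malformed lines rather than crash mid-run
        _ip, path, _status, size = parts
        totals[path] += int(size)
    return dict(totals)
```

In practice this kind of aggregation would run over rotated log files or a log pipeline, but the per-path rollup is the core of cost optimisation work.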

 

Requirements and Skills

●       0-2 years of experience as a DevOps Engineer

●       Proficient with AWS and GCP

●       A certification from relevant cloud companies

●       Knowledge of PHP will be an advantage

●       Working knowledge of databases and SQL

 

Read more
BigThinkCode Technologies
Sweety Madona
Posted by Sweety Madona
Chennai
2.8 - 4 yrs
₹5L - ₹12L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

At BigThinkCode, our technology solves complex problems. We are looking for a talented Cloud DevOps Engineer to join our Cloud Infrastructure team in Chennai.


Our ideal candidate will have expert knowledge of software development processes, operating systems, troubleshooting, and infrastructure environment set-up, along with strong problem-solving skills. This is an opportunity to join a growing team and make a substantial impact at BigThinkCode Technologies.


Please find below our job description, if interested apply / reply sharing your profile to connect and discuss.


Company: BigThinkCode Technologies

URL: https://www.bigthinkcode.com/

Experience: 2.8 – 4 years

Level: Devops Engineer, Senior

Location: Chennai (Work from Office)

Joining time: Immediate – 4 weeks of time.

 

Responsibilities of DevOps / Cloud Engineer include:

·       Understanding customer requirements and project KPIs.

·       Setting up tools and required infrastructure.

·       Defining and setting development, test, release, update, and support processes for DevOps operation.

·       Have the technical skill to review, verify, and validate the DevOps-related implementation in the project.

·       Troubleshooting techniques and fixing the bugs.

·       Monitoring the processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and to minimize waste.

·       Encouraging and building automated processes wherever possible.

·       Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management.

·       Incident management and root cause analysis.

·       Coordination and communication within the team and with customers.

·       Selecting and deploying appropriate CI/CD tools.

·       Strive for continuous improvement and build continuous integration, continuous development, and constant deployment pipeline (CI/CD Pipeline).

·       Monitoring and measuring customer experience and KPIs

·       Managing periodic reporting on the progress to the management and the customer


Required skills:

·       4+ years of experience in provisioning, operations, and management of cloud and on-prem environments.

·       Demonstrated competency with any one of the cloud technologies, such as AWS or GCP.

·       Demonstrated competency with On-Prem deployment techniques.

·       Experience in code development in at least one scripting language, such as Shell, PowerShell, or Python.

·       Knowledge of operating system administration.

·       Experience in creation of highly automated infrastructures.

·       Comprehensive knowledge regarding contemporary processes and methodologies for development and operations.

·       Strong understanding of how to secure Cloud (AWS or GCP) environments and meet compliance requirements.

·       Cloud (AWS or GCP) Disaster Recovery design and deployment across regions a plus.

·       Experience with multi-tier architectures: load balancers, caching, web servers, application servers, databases, and networking.

·       Clear written and verbal communication.

·       Manage your own time and work well both independently and as part of a team

·       Understanding of REST APIs, big data processing, and rules engines to orchestrate calls to REST APIs and other data sources like Kafka, Snowflake, and AWS S3.

·       Maintain the organization standards related to ISO:ISMS principles.

·       Desired tooling: Ansible, Chef, and Terraform or CloudFormation experience.

·       Developing our release management and upgrade infrastructure

·       Developing configuration and integration tools with customer IT systems

·       Playing a key role in defining and implementing security practices

·       Understand how to translate business requirements and “user needs” into code.

·       You take pride in designing solutions that will outlive the problem.


Read more
Delhi
6 - 9 yrs
₹6L - ₹10L / yr
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconHTML/CSS
+8 more

Job Title: Javascript Developers (Full-Stack Web)

On-site Location: NCTE, Dwarka, Delhi

Job Type: Full-Time

Company: Bharattech AI Pvt Ltd


Eligibility:

  • 6 years of experience (minimum)
  • B.E/B.Tech/M.E/M.Tech -or- MCA -or- M.Sc(IT or CS) -or- MS in Software Systems


About the Company:

Bharattech AI Pvt Ltd is a leader in providing innovative AI and data analytics solutions. We have partnered with the National Council for Teacher Education (NCTE), Delhi, to implement and develop their data analytics & MIS development lab, called VSK. We are looking for skilled Javascript Developers (Full-Stack Web) to join our team and contribute to this prestigious project.


Job Description:

Bharattech AI Pvt Ltd is seeking two Javascript Developers (Full-Stack Web) to join our team for an exciting project with NCTE, Delhi. As a Full-Stack Developer, you will play a crucial role in the development and integration of the VSK Web application and related systems.


Work Experience:

  • Minimum 6 years' experience in Web apps, PWAs, Dashboards, or Website Development.
  • Proven experience in the complete lifecycle of web application development.
  • Demonstrated experience as a full-stack developer.
  • Knowledge of either MERN, MEVN, or MEAN stack.
  • Knowledge of popular frameworks (Express/Meteor/React/Vue/Angular etc.) for any of the stacks mentioned above.


Role and Responsibilities:

  • Study the readily available client datasets and leverage them to run the VSK smoothly.
  • Communicate with the Team Lead and Project Manager to capture software requirements.
  • Develop high-level system design diagrams for program design, coding, testing, debugging, and documentation.
  • Develop, update, and modify the VSK Web application/Web portal.
  • Integrate existing software/applications with VSK using readily available APIs.


Skills and Competencies:

- Proficiency in full-stack development, including both front-end and back-end technologies.

- Strong knowledge of web application frameworks and development tools.

- Experience with API integration and software development best practices.

- Excellent problem-solving skills and attention to detail.

- Strong communication skills and the ability to work effectively in a team environment.


Why Join Us:

- Be a part of a cutting-edge project with a significant impact on the education sector.

- Work in a dynamic and collaborative environment with opportunities for professional growth.

- Competitive salary and benefits package.


Join Bharattech AI Pvt Ltd and contribute to transforming technological development at NCTE, Delhi!


Bharattech AI Pvt Ltd is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Read more
Datapure Technologies Pvt. Ltd.
Vidhi Saxena
Posted by Vidhi Saxena
Indore
7 - 12 yrs
₹10L - ₹20L / yr
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconExpress
+3 more

Responsibilities:

  • You will develop tools and applications aligning to the best coding practices. 
  • You will perform technical analysis, design, development, and implementation of projects. 
  • You will write clear quality code for software and applications and perform test reviews. 
  • You will detect and troubleshoot software issues 
  • You will develop, implement, and test APIs 
  • You will adhere to industry best practices and contribute to internal coding standards 
  • You will manipulate images and videos based on project requirements.

Requirements:

  •  You have a strong passion for start-ups and the proactiveness to deliver 
  • You have hands-on experience building services using NodeJs, ExpressJs technologies 
  • You have hands-on experience with MongoDB (NoSQL/SQL) database technologies.
  • You are good at web technologies like React JS/Next JS, JavaScript, Typescript 
  • You are good at web technologies like Restful/SOAP web services 
  • You are good at caching and third-party integration 
  • You are strong in debugging and troubleshooting skills 
  • Experience with either AWS (Amazon Web Services) or GCP (Google Cloud Platform) 
  • If you have Knowledge of Python, and Chrome extension & DevOps development is a plus. 
  • You must be proficient in building scalable backend infrastructure software or distributed systems with exposure to Front-end and backend libraries/frameworks. 
  •  Experience with Databases and microservices architecture is an advantage 
  • You should be able to push your limits and go beyond your role to scale the product 

Go-getter attitude and can drive progress with very little guidance and short turnaround time

Read more
Fynd

at Fynd

3 recruiters
Akshata Kadam
Posted by Akshata Kadam
Mumbai
3 - 8 yrs
Best in industry
skill iconPython
skill iconDjango
skill iconFlask
RESTful APIs
skill iconMongoDB
+2 more

Fynd is India’s largest omnichannel platform and a multi-platform tech company with expertise in retail tech and products in AI, ML, big data ops, gaming+crypto, image editing, and the learning space. Founded in 2012 by 3 IIT Bombay alumni: Farooq Adam, Harsh Shah and Sreeraman MG. We are headquartered in Mumbai, with 1000+ brands under management, more than 10k stores, and 23k+ pin codes serviced.



What will you do at Fynd?


  • Build scalable services to extend our platform
  • Build bulletproof API integrations with third-party APIs for various use cases
  • Evolve our Infrastructure and add a few more nines to our overall availability
  • Have full autonomy and own your code, and decide on the technologies and tools to deliver as well operate large-scale applications on AWS
  • Give back to the open-source community through contributions on code and blog posts
  • This is a startup so everything can change as we experiment with more product improvements


Some specific Requirements:


  • You know how to take ownership on things and get it done end to end
  • You have prior experience developing and working on consumer-facing web/app products
  • Hands-on experience in Python, with in-depth knowledge of asyncio and generators and their use in event-driven scenarios
  • Thorough knowledge of async programming using callbacks, promises, and async/await
  • Someone from an Android development background would be preferred.
  • Good Working knowledge of MongoDB, Redis, PostgreSQL
  • Good understanding of Data Structures, Algorithms, and Operating Systems
  • You've worked with AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3
  • Experience with Frontend Stack would be added advantage (HTML, CSS)
  • You might not have experience with all the tools that we use but you can learn those given the guidance and resources
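The asyncio experience called out above is mostly about fanning out I/O-bound work concurrently. A minimal, hypothetical sketch (the integration names and delays are placeholders, not Fynd's actual APIs):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    """Stand-in for an I/O-bound call, e.g. a third-party API request."""
    await asyncio.sleep(delay)
    return f"{name}:ok"

async def gather_integrations() -> list[str]:
    """Fan out several slow calls concurrently; total wall time is
    roughly the slowest call, not the sum -- the core win of async I/O."""
    return await asyncio.gather(
        fetch("inventory", 0.01),
        fetch("pricing", 0.02),
        fetch("orders", 0.01),
    )

results = asyncio.run(gather_integrations())
```

Note that `asyncio.gather` preserves argument order in its results regardless of which coroutine finishes first, which keeps downstream handling deterministic.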



What do we offer?


Growth

Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.


Flex University

We help you upskill by organising in-house courses on important subjects

Learning Wallet: You can also do an external course to upskill and grow, we reimburse it for you.


Culture

Community and Team building activities

Host weekly, quarterly and annual events/parties.


Wellness

Mediclaim policy for you + parents + spouse + kids

Experienced therapist for better mental health, improve productivity & work-life balance 


We work 5 days from the office and we make sure people have everything they need:-

Free meals

Snacks, goodies & a lot of fun culture

Read more
UpSolve Solutions LLP
Shaurya Kuchhal
Posted by Shaurya Kuchhal
Mumbai
1 - 4 yrs
₹3L - ₹5L / yr
skill iconData Science
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
skill iconData Analytics
+2 more

Role Description

This is a full-time client facing on-site role for a Data Scientist at UpSolve Solutions in Mumbai. The Data Scientist will be responsible for performing various day-to-day tasks, including data science, statistics, data analytics, data visualization, and data analysis. The role involves utilizing these skills to provide actionable insights to drive business decisions and solve complex problems.


Qualifications

  • Data Science, Statistics, and Data Analytics skills
  • Data Visualization and Data Analysis skills
  • Strong problem-solving and critical thinking abilities
  • Ability to work with large datasets and perform data preprocessing
  • Proficiency in programming languages such as Python or R
  • Experience with machine learning algorithms and predictive modeling
  • Excellent communication and presentation skills
  • Bachelor's or Master's degree in a relevant field (e.g., Computer Science, Statistics, Data Science)
  • Experience in the field of video and text analytics is a plus


Read more
Greyblue Ventures
Shabina Khan
Posted by Shabina Khan
Gurugram
2 - 5 yrs
₹8L - ₹12L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more

Key Responsibilities: 

  • Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices. 
  • Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications. 
  • Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others. 
  • Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar. 
  • Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure. 
  • Automation: Identify and automate repetitive tasks to improve efficiency and reliability. 
  • Security: Implement security best practices and ensure compliance with industry standards. 
  • Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products. 
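In the GitOps practice named above, cluster state is continuously reconciled against manifests in Git. A simplified, hypothetical drift-detection sketch (real controllers like Argo CD or Flux compare full manifests, not just image tags):

```python
def find_drift(desired: dict[str, str],
               running: dict[str, str]) -> dict[str, tuple]:
    """Compare desired image tags (from Git manifests) with tags actually
    running in the cluster. Returns {deployment: (desired, running)} for
    anything out of sync; a GitOps controller would reconcile these."""
    drift = {}
    for name, want in desired.items():
        have = running.get(name)  # None means not deployed at all
        if have != want:
            drift[name] = (want, have)
    return drift
```

The key design point is that Git is the single source of truth: the fix for drift is never a manual `kubectl apply`, but a commit that the controller then rolls out.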

Required Skills and Qualifications: 

  • Experience: 2-5 years of experience in a DevOps role. 
  • AWS: In-depth knowledge of AWS services and solutions. 
  • CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar. 
  • GitOps Expertise: Proficient in GitOps methodologies and tools. 
  • Kubernetes: Strong hands-on experience with Kubernetes and container orchestration. 
  • Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar. 
  • Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar. 
  • Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar. 
  • Version Control: Strong understanding of version control systems, primarily Git. 
  • Problem-Solving: Excellent problem-solving and debugging skills. 
  • Collaboration: Ability to work in a fast-paced, collaborative environment. 
  • Education: Bachelor’s or master’s degree in computer science or a related field. 


Read more
Molecular Connections

at Molecular Connections

4 recruiters
Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
Windows Azure
Google Cloud Platform (GCP)
+1 more

Responsibilities:

- Design, implement, and maintain robust CI/CD pipelines using Azure DevOps for continuous integration and continuous delivery (CI/CD) of software applications.

- Provision and manage infrastructure resources on Microsoft Azure, including virtual machines, containers, storage, and networking components.

-  Implement and manage Kubernetes clusters for containerized application deployments and orchestration.

-  Configure and utilize Azure Container Registry (ACR) for secure container image storage and management.

-  Automate infrastructure provisioning and configuration management using tools like Azure Resource Manager (ARM) templates.

- Monitor application performance and identify potential bottlenecks using Azure monitoring tools.

- Collaborate with developers and operations teams to identify and implement continuous improvement opportunities for the DevOps process.

- Troubleshoot and resolve DevOps-related issues, ensuring smooth and efficient software delivery.

- Stay up-to-date with the latest advancements in cloud technologies, DevOps tools, and best practices.

- Maintain a strong focus on security throughout the software delivery lifecycle.

- Participate in code reviews to identify potential infrastructure and deployment issues.

-  Effectively communicate with technical and non-technical audiences on DevOps processes and initiatives.

Qualifications:

- Proven experience in designing and implementing CI/CD pipelines using Azure DevOps.

- In-depth knowledge of Microsoft Azure cloud platform services (IaaS, PaaS, SaaS).

- Expertise in deploying and managing containerized applications using Kubernetes.

-  Experience with Infrastructure as Code (IaC) tools like ARM templates.

- Familiarity with Azure monitoring tools and troubleshooting techniques.

-  A strong understanding of DevOps principles and methodologies (Agile, Lean).

-  Excellent problem-solving and analytical skills.

-   Ability to work independently and as part of a team.

-   Strong written and verbal communication skills.

-   A minimum of one relevant Microsoft certification (e.g., Azure Administrator Associate, DevOps Engineer Expert) is highly preferred.


Read more
Datapure Technologies Pvt. Ltd.
Vidhi Saxena
Posted by Vidhi Saxena
Indore
2 - 4 yrs
₹4L - ₹6L / yr
skill iconKotlin
skill iconJava
skill iconPHP
skill iconNodeJS (Node.js)
skill iconXML
+3 more

"Preferred candidates who are in Indore"


Responsibilities:

● Design and develop advanced applications for the Android platform, ensuring high performance, responsiveness, and user-friendly interfaces.

● Collaborate with product managers, designers, and backend engineers to define project requirements and deliver innovative solutions.

● Implement and maintain backend services, APIs, and databases to support mobile applications.

● Conduct thorough testing and debugging to ensure application stability and performance optimization.

● Stay updated with the latest industry trends, technologies, and best practices to continuously improve development processes.

● Participate in code reviews, provide constructive feedback, and mentor junior team members when necessary.


Requirements:

● Bachelor's or Master's degree in Computer Science, Engineering, or related field.

● 2+ years of professional experience as an Android developer, with expertise in Kotlin and Java programming languages.

● Strong understanding of Android SDK, different versions of Android, and familiarity with Material Design principles.

● Proficiency in frontend technologies such as XML, JSON, and third-party libraries.

● Experience with backend development using technologies like Node.js, Python, or PHP.

● Knowledge of database management systems (e.g., MySQL, MongoDB) and experience in designing schemas and optimizing queries.

● Excellent problem-solving skills, ability to work independently, and a passion for learning new technologies.

● Good communication skills and ability to collaborate effectively within a team environment.


Preferred Qualifications:

● Experience with cloud platforms (e.g., AWS, Google Cloud) and serverless architecture.

● Familiarity with version control systems (e.g., Git) and CI/CD pipelines.

● Previous experience in Agile/Scrum methodologies.

● Published apps on the Google Play Store or contributions to open-source projects

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Tony Tom
Posted by Tony Tom
Bengaluru (Bangalore)
2 - 6 yrs
Best in industry
Terraform
skill iconPython
Linux/Unix
Infrastructure
skill iconDocker
+5 more

GCP Cloud Engineer:

  • Proficiency in infrastructure as code (Terraform).
  • Scripting and automation skills (e.g., Python, Shell); knowledge of Python is a must.
  • Collaborate with teams across the company (i.e., network, security, operations) to build complete cloud offerings.
  • Design Disaster Recovery and backup strategies to meet application objectives.
  • Working knowledge of Google Cloud
  • Working knowledge of various tools, open-source technologies, and cloud services
  • Experience working on Linux based infrastructure.
  • Excellent problem-solving and troubleshooting skills
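Day-to-day Google Cloud automation of the kind listed above often means parsing `gcloud ... --format=json` output. A hypothetical sketch using stub data (the `status` field follows the Compute Engine instance resource, where a stopped VM reports `TERMINATED`, but verify against your gcloud version):

```python
import json

def stopped_instances(gcloud_json: str) -> list[str]:
    """Given the JSON printed by `gcloud compute instances list --format=json`,
    return the names of instances in TERMINATED state -- candidates for
    cleanup or for cheaper archival of their disks."""
    return [
        inst["name"]
        for inst in json.loads(gcloud_json)
        if inst.get("status") == "TERMINATED"
    ]
```

In a pipeline this would be fed by `subprocess.run(["gcloud", ...])` or the Cloud SDK client libraries; the stub keeps the sketch self-contained.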


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Mumbai, Pune
7 - 20 yrs
Best in industry
skill icon.NET
ASP.NET
skill iconC#
Google Cloud Platform (GCP)
Migration

Job Title: .NET Developer with Cloud Migration Experience

Job Description:

We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly in migrating on-premise applications to cloud platforms such as Azure or AWS.

Responsibilities:

  • Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
  • Collaborate with cross-functional teams to define, design, and ship new features
  • Participate in code reviews and ensure coding best practices are followed
  • Work closely with the infrastructure team to migrate on-premise applications to the cloud
  • Troubleshoot and debug issues that arise during migration and post-migration phases
  • Stay updated with the latest trends and technologies in .NET development and cloud computing

Requirements:

  • Bachelor's degree in Computer Science or related field
  • X+ years of experience in .NET development using C#, MVC, and ASP.NET
  • Hands-on experience with cloud migration projects, preferably with Azure or AWS
  • Strong understanding of cloud computing concepts and principles
  • Experience with database technologies such as SQL Server
  • Excellent problem-solving and communication skills

Preferred Qualifications:

  • Microsoft Azure or AWS certification
  • Experience with other cloud platforms such as Google Cloud Platform (GCP)
  • Familiarity with DevOps practices and tools


Read more
Xcelore
Somya Dhir
Posted by Somya Dhir
Noida, Sector 2
4 - 7 yrs
₹5L - ₹15L / yr
skill iconJava
Hibernate (Java)
Spring Boot
Microservices
AWS Lambda
+2 more

MUST HAVES:

  • #java11, Java 17 & above only
  • #springboot #microservices experience is must
  • #cloud experience is must (AWS or GCP or Azure)
  • Strong understanding of #functionalprogramming and #reactiveprogramming concepts.
  • Experience with asynchronous programming and async frameworks/libraries.
  • Proficiency in #sql databases (MySQL, PostgreSQL, etc.).
  • WFO in NOIDA only.


Other requirements:

  • Knowledge of socket programming and real-time communication protocols.
  • Experience of building complex enterprise grade applications with multiple components and integrations
  • Good coding practices and ability to design solutions
  • Good communication skills
  • Ability to mentor team and give technical guidance
  • #fullstack skills with anyone of #javascript or #reactjs or #angularjs is preferable.
  • Excellent problem-solving skills and attention to detail.
  • Preferred experience with #nosql databases (MongoDB, Cassandra, Redis, etc.).
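The asynchronous-programming requirement above is language-agnostic; the fan-out pattern it refers to can be sketched as follows (shown in Python's asyncio purely for brevity, since the role itself is Java; service names and delays are made up):

```python
import asyncio

async def fetch(service: str, delay: float) -> str:
    # Stand-in for a non-blocking I/O call (e.g. an HTTP request).
    await asyncio.sleep(delay)
    return f"{service}: ok"

async def main() -> list[str]:
    # Fan out three calls concurrently; total time is roughly the slowest
    # call, not the sum -- the core win of async over sequential blocking I/O.
    return await asyncio.gather(
        fetch("orders", 0.01), fetch("payments", 0.02), fetch("users", 0.01)
    )

results = asyncio.run(main())
print(results)  # ['orders: ok', 'payments: ok', 'users: ok']
```

In the Java/Spring stack named above, the same idea appears as `CompletableFuture` composition or Reactor's `Mono`/`Flux`.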


Read more
Nyteco

at Nyteco

2 candid answers
1 video
Alokha Raj
Posted by Alokha Raj
Remote only
4 - 6 yrs
₹17L - ₹20L / yr
Data Transformation Tool (DBT)
ETL
SQL
Big Data
Google Cloud Platform (GCP)
+2 more

Join Our Journey

Jules develops an amazing end-to-end solution for recycled materials traders, importers and exporters, which means a looooot of internal, structured data to play with in order to provide reporting, alerting and insights to end-users. With about 200 tables covering all business processes, from order management to payments, including logistics, hedging and claims, the wealth that the data entered in Jules can unlock is massive.


After working with a simple stack made of Postgres, SQL queries and a visualization solution, the company is now ready to set up its data stack and only misses you. We are thinking DBT, Redshift or Snowflake, Fivetran, Metabase or Luzmo, etc. We also have an AI team already playing around with text-driven data interaction.


As a Data Engineer at Jules AI, your duties will involve both data engineering and product analytics, enhancing our data ecosystem. You will collaborate with cross-functional teams to design, develop, and sustain data pipelines, and conduct detailed analyses to generate actionable insights.


Roles And Responsibilities:

  • Work with stakeholders to determine data needs, and design and build scalable data pipelines.
  • Develop and sustain ELT processes to guarantee timely and precise data availability for analytical purposes.
  • Construct and oversee large-scale data pipelines that collect data from various sources.
  • Expand and refine our DBT setup for data transformation.
  • Engage with our data platform team to address customer issues.
  • Apply your advanced SQL and big data expertise to develop innovative data solutions.
  • Enhance and debug existing data pipelines for improved performance and reliability.
  • Generate and update dashboards and reports to share analytical results with stakeholders.
  • Implement data quality controls and validation procedures to maintain data accuracy and integrity.
  • Work with various teams to incorporate analytics into product development efforts.
  • Use technologies like Snowflake, DBT, and Fivetran effectively.
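The DBT-style transform-and-validate loop described above can be sketched end to end with SQLite standing in for a warehouse like Snowflake or Redshift (table and column names are invented; a real setup would express the check as a DBT `not_null` test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [(1, 120.0), (2, None), (3, 80.0)])

# Transform step: load only clean rows into the staging table.
conn.execute("""
    CREATE TABLE stg_orders AS
    SELECT id, amount FROM raw_orders WHERE amount IS NOT NULL
""")

# Data-quality gate, in the spirit of a DBT not_null test:
bad = conn.execute(
    "SELECT COUNT(*) FROM stg_orders WHERE amount IS NULL").fetchone()[0]
assert bad == 0, "null amounts leaked into stg_orders"

rows = conn.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
print(rows)  # 2
```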


Mandatory Qualifications:

  • Hold a Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Possess at least 4 years of experience in Data Engineering, ETL Building, database management, and Data Warehousing.
  • Demonstrated expertise as an Analytics Engineer or in a similar role.
  • Proficient in SQL, a scripting language (Python), and a data visualization tool.
  • Mandatory experience in working with DBT.
  • Experience in working with Airflow, and cloud platforms like AWS, GCP, or Snowflake.
  • Deep knowledge of ETL/ELT patterns.
  • Require at least 1 year of experience in building Data pipelines and leading data warehouse projects.
  • Experienced in mentoring data professionals across all levels, from junior to senior.
  • Proven track record in establishing new data engineering processes and navigating through ambiguity.
  • Preferred Skills: Knowledge of Snowflake and reverse ETL tools is advantageous.


Grow, Develop, and Thrive With Us

  • Global Collaboration: Work with a dynamic team that’s making an impact across the globe, in the recycling industry and beyond. We have customers in India, Singapore, the United States, Mexico, Germany, France and more.
  • Professional Growth: a highway toward setting up a great data team and evolving into a leader.
  • Flexible Work Environment: Competitive compensation, performance-based rewards, health benefits, paid time off, and flexible working hours to support your well-being.


Apply to us directly : https://nyteco.keka.com/careers/jobdetails/41442

Read more
Apptware solutions LLP Pune
Pune
6 - 10 yrs
₹9L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
+5 more

Company - Apptware Solutions

Location Baner Pune

Team Size - 130+


Job Description -

Cloud Engineer with 8+ years of experience


Roles and Responsibilities


● Have 8+ years of strong experience in deployment, management and maintenance of large systems on-premise or cloud

● Experience maintaining and deploying highly-available, fault-tolerant systems at scale

● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)

● Practical experience with Docker containerization and clustering (Kubernetes/ECS)

● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)

● Version control system experience (e.g. Git)

● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)

● Operational (e.g. HA/Backups) NoSQL experience (e.g. MongoDB, Redis) SQL experience (e.g. MySQL)

● Experience with configuration management tools (e.g. Ansible, Chef)

● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)

● Bachelor's or master’s degree in CS, or equivalent practical experience

● Effective communication skills

● Hands-on experience with cloud providers like MS Azure and GCP

● A sense of ownership and ability to operate independently

● Experience with Jira and one or more Agile SDLC methodologies

● Nice to Have:

○ Sensu and Graphite

○ Ruby or Java

○ Python or Groovy

○ Java Performance Analysis


Role: Cloud Engineer

Industry Type: IT-Software, Software Services

Functional Area: IT Software - Application Programming, Maintenance Employment Type: Full Time, Permanent

Role Category: Programming & Design

Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Mohit Singh
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP or Azure cloud platforms is also required.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation
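Ingestion from heterogeneous sources, as described above, mostly means normalizing different formats into one schema before storage. A toy sketch with CSV and JSON inputs (field names are invented):

```python
import csv, io, json

def from_csv(text: str):
    # Normalize CSV rows into the common {"id", "value"} schema.
    for row in csv.DictReader(io.StringIO(text)):
        yield {"id": int(row["id"]), "value": float(row["value"])}

def from_json(text: str):
    # Normalize JSON records (note the differently named "val" field).
    for rec in json.loads(text):
        yield {"id": rec["id"], "value": float(rec["val"])}

csv_src = "id,value\n1,10.5\n2,7.0\n"
json_src = '[{"id": 3, "val": 2.5}]'

records = list(from_csv(csv_src)) + list(from_json(json_src))
print(sum(r["value"] for r in records))  # 20.0
```

The same per-source-adapter shape scales up to Kafka/NiFi/Spark pipelines, where each adapter becomes a connector or a deserialization step.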

Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1.Overall 5+ years of IT experience with 3+ years in Data related technologies

2.Minimum 2.5 years of experience in Big Data technologies and working exposure in at least one cloud platform on related data services (AWS / Azure / GCP)

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines.

4. Strong experience in at least one of the programming languages Java, Scala or Python; Java preferable.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6.Well-versed and working knowledge with data platform related services on at least 1 cloud platform, IAM and data security


Preferred Experience and Knowledge (Good to Have):

# Competency

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience

2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4.Performance tuning and optimization of data pipelines

5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality

6.Cloud data specialty and other related Big data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Read more
mazosol
kirthick murali
Posted by kirthick murali
Mumbai
10 - 20 yrs
₹30L - ₹58L / yr
skill iconPython
skill iconR Programming
PySpark
Google Cloud Platform (GCP)
SQL Azure

Data Scientist – Program Embedded 

Job Description:   

We are seeking a highly skilled and motivated senior data scientist to support a big data program. The successful candidate will play a pivotal role across multiple projects in this program, covering traditional tasks from revenue management, demand forecasting and improving customer experience to testing and adopting new tools/platforms such as Copilot and Fabric for different purposes. The candidate should have deep expertise in machine learning methodology and applications, and should have completed multiple large-scale data science projects (full cycle, from ideation to BAU). Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. This is a data science role directly embedded into the program and its projects; stakeholder management and collaboration with partners are crucial to success in this role (on top of the deep expertise).

What we are looking for: 

  1. Highly proficient in Python/PySpark/R. 
  2. Understanding of MLOps concepts and working experience in product industrialization (from a data science point of view); experience in building products for live deployment, with continuous development and continuous integration. 
  3. Familiar with cloud platforms such as Azure, GCP, and the data management systems on such platform. Familiar with Databricks and product deployment on Databricks. 
  4. Experience in ML projects involving techniques: Regression, Time Series, Clustering, Classification, Dimension Reduction, Anomaly detection with traditional ML approaches and DL approaches. 
  5. Solid background in statistics, probability distributions, A/B testing validation, univariate/multivariate analysis, hypothesis test for different purpose, data augmentation etc. 
  6. Familiar with designing testing framework for different modelling practice/projects based on business needs. 
  7. Exposure to Gen AI tools and enthusiastic about experimenting and have new ideas on what can be done. 
  8. If they have improved an internal company process using an AI tool, that would be great (e.g. process simplification, manual task automation, auto emails) 
  9. Ideally, 10+ years of experience, including independent business-facing roles. 
  10. CPG or retail as a data scientist would be nice, but not number one priority, especially for those who have navigated through multiple industries. 
  11. Being proactive and collaborative would be essential. 

 

Some projects examples within the program: 

  1. Test new tools/platforms such as Copilot and Fabric for commercial reporting: testing, validation and building trust. 
  2. Building algorithms for predicting trend in category, consumptions to support dashboards. 
  3. Revenue Growth Management, create/understand the algorithms behind the tools (can be built by 3rd parties) we need to maintain or choose to improve. Able to prioritize and build product roadmap. Able to design new solutions and articulate/quantify the limitation of the solutions. 
  4. Demand forecasting, create localized forecasts to improve in store availability. Proper model monitoring for early detection of potential issues in the forecast focusing particularly on improving the end user experience. 
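For the demand-forecasting example above, the simplest baseline any localized forecast gets compared against is a trailing moving average. A sketch (the sales numbers are invented):

```python
def moving_average_forecast(history: list, window: int) -> float:
    """Naive one-step-ahead forecast: mean of the last `window` points."""
    recent = history[-window:]
    return sum(recent) / len(recent)

weekly_units = [120, 135, 128, 140, 150, 145]
print(moving_average_forecast(weekly_units, window=3))  # 145.0
```

Model monitoring, as the project description notes, then amounts to tracking the error of the real model against baselines like this one and alerting on drift.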


Read more
Mumbai
5 - 10 yrs
₹8L - ₹20L / yr
skill iconData Science
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+6 more


Data Scientist – Delivery & New Frontiers Manager 

Job Description:   

We are seeking a highly skilled and motivated data scientist to join our Data Science team. The successful candidate will play a pivotal role in our data-driven initiatives and be responsible for designing, developing, and deploying data science solutions that drive business value for stakeholders. This role involves mapping business problems to formal data science solutions, working with a wide range of structured and unstructured data, architecture design, creating sophisticated models, setting up operations for the data science product with support from the MLOps team, and facilitating business workshops. In a nutshell, this person will represent data science and provide expertise across the full project cycle. Expectations of the successful candidate will be above those of a typical data scientist. Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. 

Responsibilities: 

  • Collaborate with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand business needs and identify data science opportunities. 
  • Map complex business problems to data science problems and design data science solutions using the GCP/Azure Databricks platform. 
  • Collect, clean, and preprocess large datasets from various internal and external sources.  
  • Streamlining data science process working with Data Engineering, and Technology teams. 
  • Managing multiple analytics projects within a Function to deliver end-to-end data science solutions, creation of insights and identify patterns.  
  • Develop and maintain data pipelines and infrastructure to support the data science projects 
  • Communicate findings and recommendations to stakeholders through data visualizations and presentations. 
  • Stay up to date with the latest data science trends and technologies, specifically on GCP 
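The clean-and-preprocess responsibility above typically starts with imputation and scaling. A minimal pure-Python sketch (mean imputation and min-max scaling are one common choice, not the only one):

```python
def impute_mean(xs: list) -> list:
    """Replace None with the mean of the observed values."""
    observed = [x for x in xs if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in xs]

def min_max_scale(xs: list) -> list:
    """Rescale values linearly into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

raw = [10.0, None, 30.0]
clean = min_max_scale(impute_mean(raw))
print(clean)  # [0.0, 0.5, 1.0]
```

At scale the same steps would run in Spark or a BigQuery SQL transform rather than plain Python lists.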

 

Education / Certifications:  

Bachelor’s or Master’s in Computer Science, Engineering, Computational Statistics, Mathematics. 

Job specific requirements:  

  • Brings 5+ years of deep data science experience 

  • Strong knowledge of machine learning and statistical modeling techniques in a cloud-based environment such as GCP, Azure, Amazon 

  • Experience with programming languages such as Python, R, Spark 
  • Experience with data visualization tools such as Tableau, Power BI, and D3.js 
  • Strong understanding of data structures, algorithms, and software design principles 
  • Experience with GCP platforms and services such as BigQuery, Cloud ML Engine, and Cloud Storage 
  • Experience in configuring and setting up the version control on Code, Data, and Machine Learning Models using GitHub. 
  • Self-driven, be able to work with cross-functional teams in a fast-paced environment, adaptability to the changing business needs. 
  • Strong analytical and problem-solving skills 
  • Excellent verbal and written communication skills 
  • Working knowledge with application architecture, data security and compliance team. 


Read more
admedia
Ashita John
Posted by Ashita John
Remote only
6 - 16 yrs
₹15L - ₹25L / yr
skill iconJava
Hibernate (Java)
Spring
Apache
MySQL
+7 more

Responsibilities:

  • Develop and maintain a high-quality, scalable, and efficient Java codebase for our ad-serving platform.
  • Collaborate with cross-functional teams, including product managers, designers, and other developers, to understand requirements and translate them into technical solutions.
  • Design and implement new features and functionalities in the ad-serving system, focusing on performance optimization and reliability.
  • Troubleshoot and debug complex issues in the ad server environment, providing timely resolutions to ensure uninterrupted service.
  • Conduct code reviews, provide constructive feedback, and enforce coding best practices to maintain code quality and consistency across the platform.
  • Stay updated with emerging technologies and industry trends in ad serving and digital advertising, and integrate relevant innovations into our platform.
  • Work closely with DevOps and infrastructure teams to deploy and maintain the ad-serving platform in a cloud-based environment.
  • Collaborate with stakeholders to gather requirements, define technical specifications, and estimate development efforts for new projects and features.
  • Mentor junior developers, sharing knowledge and best practices to foster a culture of continuous learning and improvement within the development team.
  • Participate in on-call rotations and provide support for production issues as needed, ensuring maximum uptime and reliability of the ad-serving platform.

Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Mohit Singh
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP or Azure cloud platforms is preferable.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1.Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies

2.Minimum 1.5 years of experience in Big Data technologies

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala or Python; Java preferable.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

# Competency

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience

2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4.Performance tuning and optimization of data pipelines

5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality

6.Working knowledge with data platform related services on at least 1 cloud platform, IAM and data security

7.Cloud data specialty and other related Big data technology certifications



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Read more
Astra Security

at Astra Security

2 candid answers
1 video
Human Resources
Posted by Human Resources
Remote only
2 - 4 yrs
₹10L - ₹19L / yr
skill iconGo Programming (Golang)
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
RESTful APIs
SaaS
+12 more

About us

Astra is a cyber security SaaS company that makes otherwise chaotic pentests a breeze with its one of a kind Pentest Platform. Astra's continuous vulnerability scanner emulates hacker behavior to scan applications for 8300+ security tests. CTOs & CISOs love Astra because it helps them fix vulnerabilities in record time and move from DevOps to DevSecOps with Astra's CI/CD integrations.


Astra is loved by 650+ companies across the globe. In 2023 Astra uncovered 2 million+ vulnerabilities for its customers, saving customers $69M+ in potential losses due to security vulnerabilities. 


We've been awarded by the President of France Mr. François Hollande at the La French Tech program and Prime Minister of India Shri Narendra Modi at the Global Conference on Cyber Security. Loom, MamaEarth, Muthoot Finance, Canara Robeco, ScripBox etc. are a few of Astra’s customers.


Role Overview

As an SDE 2 Back-end Engineer at Astra, you will play a crucial role in the development of a new vulnerability scanner from scratch. You will be architecting & engineering a scalable technical solution from the ground-up.

You will have the opportunity to work alongside talented individuals, collaborating to deliver innovative solutions and pushing the boundaries of what's possible in vulnerability scanning. The role requires deep collaboration with the founders, product, engineering & security teams.

Join our team and contribute to the development of a cutting-edge SaaS security platform, where high-quality engineering and continuous learning are at the core of everything we do.


Roles & Responsibilities:


  • You will be joining our Vulnerability Scanner team which builds a security engine to identify vulnerabilities in technical infrastructure.
  • You will be the technical product owner of the scanner, which would involve managing a lean team of backend engineers to ensure smooth implementation of the technical product roadmap.
  • Research about security vulnerabilities, CVEs, and zero-days affecting cloud/web/API infrastructure.
  • Work in an agile environment of engineers to architect, design, develop and build our microservice infrastructure.
  • You will research, design, code, troubleshoot and support (on-call). What you create is also what you own.
  • Writing secure, high quality, modular, testable & well documented code for features outlined in every sprint.
  • Design and implement APIs in support of other services with a highly scalable, flexible, and secure backend using GoLang
  • Hands-on experience with creating production-ready code & optimizing it by identifying and correcting bottlenecks.
  • Driving strict code review standards among the team.
  • Ensuring timely delivery of the features/products
  • Working with product managers to ensure product delivery status is transparent and the end product always looks the way it was imagined
  • Work closely with Security & Product teams in writing vulnerability detection rules, APIs etc.


Required Qualifications & Skills: 


  • Strong 2-4 years of relevant development experience in Golang
  • Experience in building a technical product from idea to production
  • Ability to design and build highly scalable and maintainable systems in Golang
  • Expertise in goroutines and channels to write efficient code that utilizes multi-core CPUs optimally
  • Must have hands-on experience with managing AWS/Google Cloud infrastructure
  • Hands-on experience in creating low-latency, high-throughput REST APIs
  • Ability to write test suites and maintain code coverage above 80%
  • Working knowledge of PostgreSQL, Redis, Kafka
  • Good to have: experience with Docker, Kubernetes, Kafka
  • Good understanding of data structures, algorithms and operating systems
  • Understanding of cloud/web security concepts would be an added advantage
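The goroutines-and-channels expertise above is about concurrent fan-out of work, such as running many checks against many targets. The analogous pattern, sketched here in Python rather than Go purely for brevity (the check logic is a placeholder, not Astra's scanner):

```python
from concurrent.futures import ThreadPoolExecutor

def check_target(target: str) -> tuple:
    # Placeholder for a real vulnerability check (e.g. an HTTP probe).
    return target, target.endswith(".internal")

targets = ["api.example.com", "db.internal", "cache.internal"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # pool.map preserves input order, like collecting from a results channel.
    flagged = [t for t, hit in pool.map(check_target, targets) if hit]
print(flagged)  # ['db.internal', 'cache.internal']
```

In Go the same shape is a worker pool: goroutines reading targets from one channel and writing findings to another.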


What We Offer:


  • Adrenalin rush of being a part of a fast-growing company
  • Fully remote & agile working environment
  • A wholesome opportunity in a fast-paced environment where you get to build things from scratch, improve and influence product design decisions
  • Holistic understanding of SaaS and enterprise security business
  • Opportunity to engage and collaborate with developers globally
  • Experience with security side of things
  • Annual trips to beaches or mountains (last one was Chikmangaluru)
  • Open and supportive culture 
Read more
FindingPi Inc

at FindingPi Inc

1 recruiter
Mrinmayee Bandopadhyay
Posted by Mrinmayee Bandopadhyay
ShivajiNager
4 - 6 yrs
₹6L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

About the role:

 

We are seeking a highly skilled Azure DevOps Engineer with a strong background in backend development to join our rapidly growing team. The ideal candidate will have a minimum of 4 years of experience, with extensive experience in building and maintaining CI/CD pipelines, automating deployment processes, and optimizing infrastructure on Azure. Additionally, expertise in backend technologies and development frameworks is required to collaborate effectively with the development team in delivering scalable and efficient solutions.

 

Responsibilities

 

  • Collaborate with development and operations teams to implement continuous integration and deployment processes.
  • Automate infrastructure provisioning, configuration management, and application deployment using tools such as Ansible and Jenkins.
  • Design, implement, and maintain Azure DevOps pipelines for continuous integration and continuous delivery (CI/CD)
  • Develop and maintain build and deployment pipelines, ensuring that they are scalable, secure, and reliable.
  • Monitor and maintain the health of the production infrastructure, including load balancers, databases, and application servers.
  • Automate the software development and delivery lifecycle, including code building, testing, deployment, and release.
  • Familiarity with Azure CLI, Azure REST APIs, Azure Resource Manager template, Azure billing/cost management and the Azure Management Console
  • Must have experience with at least one programming language (Java, .NET, Python)
  • Ensure high availability of the production environment by implementing disaster recovery and business continuity plans.
  • Build and maintain monitoring, alerting, and trending operational tools (CloudWatch, New Relic, Splunk, ELK, Grafana, Nagios).
  • Stay up to date with new technologies and trends in DevOps and make recommendations for improvements to existing processes and infrastructure.
  • Contribute to backend development projects, ensuring robust and scalable solutions.
  • Work closely with the development team to understand application requirements and provide technical expertise in backend architecture.
  • Design and implement database schemas.
  • Identify and implement opportunities for performance optimization and scalability of backend systems.
  • Participate in code reviews, architectural discussions, and sprint planning sessions.
  • Stay updated with the latest Azure technologies, tools, and best practices to continuously improve our development and deployment processes.
  • Mentor junior team members and provide guidance and training on best practices in DevOps.
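The CI/CD responsibilities above map onto an `azure-pipelines.yml` definition; a minimal skeleton might look like the following (stage layout and script names are placeholders, not the team's actual pipeline):

```yaml
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: ./build.sh      # placeholder build step
          - script: ./run-tests.sh  # placeholder test step
  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployToStaging
        environment: staging       # Azure DevOps environment for approvals
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh staging  # placeholder deploy step
```

Multi-stage pipelines like this are also where the Terraform/Ansible provisioning steps from the requirements would be invoked.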

 

 

Required Qualifications

  • BS/MS in Computer Science, Engineering, or a related field
  • 4+ years of experience as an Azure DevOps Engineer (or similar role), with experience in backend development.
  • Strong understanding of CI/CD principles and practices.
  • Expertise in Azure DevOps services, including Azure Pipelines, Azure Repos, and Azure Boards.
  • Experience with infrastructure automation tools like Terraform or Ansible.
  • Proficient in scripting languages like PowerShell or Python.
  • Experience with Linux and Windows server administration.
  • Strong understanding of backend development principles and technologies.
  • Excellent communication and collaboration skills.
  • Ability to work independently and as part of a team.
  • Problem-solving and analytical skills.
  • Experience with industry frameworks and methodologies: ITIL/Agile/Scrum/DevOps
  • Prior experience working at a product-based company.

 

What we offer:

  • Competitive salary and benefits package
  • Opportunity for growth and advancement within the company
  • Collaborative, dynamic, and fun work environment
  • Possibility to work with cutting-edge technologies and innovative projects


ToXSL Technologies Pvt Ltd

at ToXSL Technologies Pvt Ltd

1 video
1 recruiter
Parul Kapoor
Posted by Parul Kapoor
Mohali, Chandigarh
2 - 4 yrs
₹4L - ₹8L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

Requirements :

  •   Good knowledge of Linux (Ubuntu).
  •   Knowledge of general networking practices/protocols and administrative tasks.
  •   Adding, removing, or updating user account information, resetting passwords, etc.
  •   Scripting to ensure operations automation and data gathering are accomplished seamlessly.
  •   Ability to work cooperatively with software developers, testers, and database administrators.
  •   Experience with software version control systems (Git) and CI.
  •   Knowledge of web servers such as Apache and Nginx.
  •   Experience with e-mail servers based on Postfix and Dovecot.
  •   Understanding of Docker and Kubernetes containers.
  •   Experience with IT hardware, Linux servers, and system administration.
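As a sketch of the scripting-for-data-gathering requirement above, the following Python snippet (paths are illustrative) collects free-disk-space percentages using only the standard library:

```python
import shutil

def disk_usage_report(paths=("/",)):
    """Return {mount_point: percent_free} for the given paths --
    the kind of routine data gathering a sysadmin script automates."""
    report = {}
    for path in paths:
        total, used, free = shutil.disk_usage(path)
        report[path] = round(100 * free / total, 1)
    return report
```

A real automation script would feed this into the monitoring stack or a scheduled report rather than returning it directly.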

 

Highlights:

  • Working 5 days a week.
  • Group Health Insurance for our employees.
  • Work with a team of 300+ excellent engineers.
  • Extra Compensation for Night Shifts.
  • Additional Salary for an extra day spent in the office.
  • Lunch buffets for all our employees.
  • Fantastic Friday meals to dine in for employees.
  • Yearly and quarterly awards with cash prizes, birthday celebrations, dinner coupons, etc.
  • Team Dinners on Project Completion.
  • Festival and month-end celebrations.


Arting Digital
Pragati Bhardwaj
Posted by Pragati Bhardwaj
Gurugram
3 - 10 yrs
₹12L - ₹14L / yr
B2B Marketing
Sales
Cloud Computing
end to end sales
Google Cloud Platform (GCP)
+2 more

Job Title - Cloud Sales Specialist

CTC - 12-14 LPA

Location - Gurgaon

Experience - 6+ years

Working Mode - Work from Office

Critical Skills - Good communication skills, cloud sales, B2B sales

Qualification - Any Engineering/Computer degree



The Cloud Sales Specialist is responsible for achieving revenue targets and ensuring on-time collections for the assigned cloud products/services (for example, Azure, AWS, GCP) in the respective location(s). The role holder is responsible for the effective management of the sales funnel, execution of marketing activities, and coordination of channel partner enablement initiatives for the assigned products/services. Building and maintaining strong professional relationships with vendor and channel partner representatives is critical to the role.


Responsibilities:


  • Responsible for achieving revenue targets (quarterly, annual) through effective sales funnel management for the assigned products/services in the respective location(s)
  • Be responsible for on-time collections from channel partners and execution of marketing activities for the assigned products/services
  • Build and maintain relationships with vendor representatives and channel partners for the assigned products/services
  • Responsible for MIS, reports generation, documentation, and compliance for sales, collection, and channel enablement activities, as per guidelines



Requirements:


  • Must have experience in Cloud Sales and B2B Sales.
  • Field Sales would be an added advantage.
  • Experience of around 3 to 5 years in the sales function in IT Distribution, preferably in cloud solutions
  • Should possess an understanding of the sales, distribution, and channel management process for cloud solutions
  • Good Product knowledge of Cloud Solution offerings
  • Cloud sales certification with one or more vendors is mandatory (for example, Amazon, Oracle, GCP)
  • Should possess excellent interpersonal and communication skills
  • Should be able to build strong relations with key stakeholders


Chennai, Coimbatore
6 - 10 yrs
₹10L - ₹25L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
skill iconAmazon Web Services (AWS)
+2 more

Role & responsibilities

  • Senior Java developer with 6 to 10 years of experience, having worked on Java, Spring Boot, Hibernate, microservices, Redis, and AWS S3
  • Contribute to all stages of the software development lifecycle
  • Design, implement, and maintain Java-based applications that can be high-volume and low-latency
  • Analyze user requirements to define business objectives
  • Envisioning system features and functionality
  • Define application objectives and functionality
  • Ensure application designs conform to business goals
  • Develop and test software
  • Should have good experience in Code Review
  • Expecting to be 100% hands-on while working with the clients directly
  • Performing requirement analysis
  • Developing high-quality and detailed designs
  • Conducting unit testing using automated unit test frameworks
  • Identifying risk and conducting mitigation action planning
  • Reviewing the work of other developers and providing feedback
  • Using coding standards and best practices to ensure quality
  • Communicating with customers to resolve issues
  • Good Communication Skills 


A LEADING US BASED MNC

A LEADING US BASED MNC

Agency job
via Zeal Consultants by Zeal Consultants
Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram
5 - 10 yrs
₹14L - ₹15L / yr
Google Cloud Platform (GCP)
Spark
PySpark
Apache Spark
"DATA STREAMING"

Data Engineering : Senior Engineer / Manager


As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles when creating custom solutions or implementing packaged solutions, and independently drive design discussions to ensure the overall health of the solution.


Must Have skills :


1. GCP


2. Spark streaming : Live data streaming experience is desired.


3. Any one coding language: Java / Python / Scala
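To make the streaming requirement concrete: the core aggregation pattern in Spark Structured Streaming is grouping events into fixed time windows. The toy, dependency-free Python sketch below (function and field names are illustrative) shows the tumbling-window count that Spark performs incrementally and at scale:

```python
from collections import defaultdict

def windowed_counts(events, window_sec=10):
    """Bucket (timestamp, key) events into fixed tumbling windows
    and count occurrences per key -- a toy version of a Spark
    Structured Streaming windowed groupBy/count."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        start = (ts // window_sec) * window_sec  # window start boundary
        windows[start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```

The real engine additionally handles late data, watermarks, and incremental state, which is where most of the live-streaming experience asked for above comes in.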



Skills & Experience :


- Overall experience of a minimum of 5 years, with at least 4 years of relevant experience in Big Data technologies


- Hands-on experience with the Hadoop stack - HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build an end-to-end data pipeline. Working knowledge of real-time data pipelines is an added advantage.


- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred


- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


- Working knowledge of data platform related services on GCP


- Bachelor's degree with 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position


Your Impact :


- Data Ingestion, Integration and Transformation


- Data Storage and Computation Frameworks, Performance Optimizations


- Analytics & Visualizations


- Infrastructure & Cloud Computing


- Data Management Platforms


- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time


- Build functionality for data analytics, search and aggregation
