50+ Google Cloud Platform (GCP) Jobs in India


Role: GCP Data Engineer
Notice Period: Immediate Joiners
Experience: 5+ years
Location: Remote
Company: Deqode
About Deqode
At Deqode, we work with next-gen technologies to help businesses solve complex data challenges. Our collaborative teams build reliable, scalable systems that power smarter decisions and real-time analytics.
Key Responsibilities
- Build and maintain scalable, automated data pipelines using Python, PySpark, and SQL.
- Work on cloud-native data infrastructure using Google Cloud Platform (BigQuery, Cloud Storage, Dataflow).
- Implement clean, reusable transformations using DBT and Databricks.
- Design and schedule workflows using Apache Airflow (a minimal DAG sketch follows this list).
- Collaborate with data scientists and analysts to ensure downstream data usability.
- Optimize pipelines and systems for performance and cost-efficiency.
- Follow best software engineering practices: version control, unit testing, code reviews, CI/CD.
- Manage and troubleshoot data workflows in Linux environments.
- Apply data governance and access control via Unity Catalog or similar tools.
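For a sense of the orchestration work involved, below is a minimal sketch of an Airflow DAG wiring a GCS extract to a BigQuery load, assuming Airflow 2.x; the task bodies are stubbed, and the pipeline and helper names are hypothetical rather than part of any real project.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_gcs(**context):
    """Pull source data and stage it in Cloud Storage (stub)."""
    ...


def load_to_bigquery(**context):
    """Load the staged files into a BigQuery table (stub)."""
    ...


with DAG(
    dag_id="daily_sales_pipeline",    # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_gcs", python_callable=extract_to_gcs)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)

    extract >> load  # stage to GCS before loading into BigQuery
```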
Required Skills & Experience
- Strong hands-on experience with PySpark, Spark SQL, and Databricks.
- Solid understanding of GCP services (BigQuery, Cloud Functions, Dataflow, Cloud Storage).
- Proficiency in Python for scripting and automation.
- Expertise in SQL and data modeling.
- Experience with DBT for data transformations.
- Working knowledge of Airflow for workflow orchestration.
- Comfortable with Linux-based systems for deployment and troubleshooting.
- Familiar with Git for version control and collaborative development.
- Understanding of data pipeline optimization, monitoring, and debugging.
Position: Project Manager
Location: Bengaluru, India (Hybrid/Remote flexibility available)
Company: PGAGI Consultancy Pvt. Ltd
About PGAGI
At PGAGI, we are building the future where human and artificial intelligence coexist to solve complex problems, accelerate innovation, and power sustainable growth. We develop and deploy advanced AI solutions across industries, making AI not just a tool but a transformational force for businesses and society.
Position Summary
PGAGI is seeking a dynamic and experienced Project Manager to lead cross-functional engineering teams and drive the successful execution of multiple AI/ML-centric projects. The ideal candidate is a strategic thinker with a solid background in engineering-led product/project management, especially in AI/ML product lifecycles. This role is crucial to scaling our technical operations, ensuring seamless collaboration, timely delivery, and high-impact results across initiatives.
Key Responsibilities
• Lead Engineering Teams Across AI/ML Projects: Manage and mentor cross-functional teams of ML engineers, DevOps professionals, and software developers through agile delivery cycles, ensuring timely and high-quality execution of AI-focused initiatives.
• Drive Agile Project Execution: Define project scope, objectives, timelines, and deliverables using Agile/Scrum methodologies. Ensure continuous sprint planning, backlog grooming, and milestone tracking via tools like Jira or GitHub Projects.
• Manage Multiple Concurrent Projects: Oversee the full lifecycle of multiple high-priority projects—ranging from AI model development and infrastructure integration to client delivery and platform enhancements.
• Collaborate with Technical and Business Stakeholders: Act as the bridge between engineering, research, and client-facing teams, translating complex requirements into actionable tasks and product features.
• Maintain Engineering and Infrastructure Quality: Uphold rigorous engineering standards across deployments. Coordinate testing, model performance validation, version control, and CI/CD operations.
• Budget and Resource Allocation: Optimize resource distribution across teams, track project costs, and ensure effective use of cloud infrastructure and personnel to maximize project ROI.
• Risk Management & Mitigation: Identify risks proactively across technical and operational layers. Develop mitigation plans and troubleshoot issues that may impact timelines or performance.
• Monitor KPIs and Delivery Metrics: Establish and monitor performance indicators such as sprint velocity, deployment frequency, incident response times, and customer satisfaction for each release.
• Support Continuous Improvement: Foster a culture of feedback and iteration. Champion retrospectives and process reviews to continually refine development practices and workflows.
Qualifications:
• Education: Bachelor’s or Master’s in Computer Science, Engineering, or a related technical field.
• Experience: Minimum 5 years of experience as a Project Manager, with at least 2 years managing AI/ML or software engineering teams.
• Tech Expertise: Familiarity with AI/ML lifecycles, cloud platforms (AWS, GCP, or Azure), and DevOps pipelines (Docker, Kubernetes, GitHub Actions, Jenkins).
• Tools: Strong experience with Jira, Confluence, and project tracking/reporting tools.
• Leadership: Proven success leading high-performing engineering teams in a fast-paced, innovative environment.
• Communication: Excellent written and verbal skills to interface with both technical and non-technical stakeholders.
• Certifications (Preferred): PMP, CSM, or certifications in AI/ML project management or cloud technologies.
Why Join PGAGI?
• Lead cutting-edge AI/ML product teams building scalable, impactful solutions.
• Be part of a fast-growing, innovation-driven startup environment.
• Enjoy a collaborative, intellectually stimulating workplace with growth opportunities.
• Competitive compensation and performance-based rewards.
• Access to learning resources, mentoring, and AI/DevOps communities.
A backend developer is an engineer who handles databases, servers, systems engineering, and client integration. Depending on the project, customers may need a mobile stack, a web stack, or a native application stack.
You will be responsible for:
- Building reusable code and libraries for future use.
- Owning and building new modules/features end-to-end, independently.
- Collaborating with other team members and stakeholders.
Required Skills:
- Thorough understanding of Node.js and TypeScript.
- Excellence in at least one framework such as StrongLoop LoopBack, Express.js, or Sails.js.
- Basic architectural understanding of modern web applications.
- Diligence in following coding standards.
- Must be good with Git and Git workflows.
- Experience with external integrations is a plus.
- Working knowledge of AWS, GCP, or Azure.
- Expertise with Linux-based systems.
- Experience with CI/CD tools like Jenkins is a plus.
- Experience with testing and automation frameworks.
- Extensive understanding of relational database systems (RDBMS).

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python (a small sketch follows this list).
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
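As an illustration of the Python transformation work above, here is a small, self-contained sketch using pandas; the file and column names are hypothetical.

```python
import pandas as pd

# Ingest: read raw data staged by an upstream process (hypothetical file)
orders = pd.read_csv("raw_orders.csv", parse_dates=["order_date"])

# Clean: drop exact duplicates and rows missing a customer id
orders = orders.drop_duplicates().dropna(subset=["customer_id"])

# Transform: compute daily revenue per customer
daily_revenue = (
    orders.assign(revenue=orders["quantity"] * orders["unit_price"])
    .groupby(["customer_id", orders["order_date"].dt.date])["revenue"]
    .sum()
    .reset_index(name="daily_revenue")
)

# Load: write an analysis-ready file for downstream consumers
daily_revenue.to_csv("daily_revenue.csv", index=False)
```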
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.

Role description:
You will build curated, enterprise-grade solutions for GenAI application deployment at production scale for clients. The role demands solid, hands-on development and engineering skills across the full GenAI lifecycle: data ingestion, choosing the right-fit LLMs, simple and advanced RAG (a bare-bones sketch follows the skills list below), guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. A strong ML background combined with engineering skills is highly preferred for this LLMOps role.
Required skills:
- 4-8 years of experience working on ML projects, including business requirement gathering, model development, training, deployment at scale, and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, data engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked with proprietary and open-source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
- Experience in GenAI application deployment on cloud and on-premise at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge of Kubernetes
- Experience with at least one cloud (AWS / GCP / Azure) to deploy AI services
- Experience in creating workable prototypes using agentic AI frameworks like CrewAI, TaskWeaver, AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired: experience with open-source tools for ML development, deployment, observability, and integration
- Background in DevOps and MLOps will be a plus
- Experience working with collaborative code-versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills
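To make the "simple RAG" requirement concrete, here is a deliberately framework-free sketch of the retrieve-then-generate loop; embed() and generate() are hypothetical stubs standing in for a real embedding model and LLM client, and a production build would use LangChain or a similar framework instead.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Return a vector for `text` (stub for a real embedding model)."""
    raise NotImplementedError


def generate(prompt: str) -> str:
    """Return the LLM's answer (stub for a real model call)."""
    raise NotImplementedError


def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query and keep the top k."""
    q = embed(query)
    scored = []
    for d in docs:
        v = embed(d)
        score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((score, d))
    scored.sort(reverse=True)
    return [d for _, d in scored[:k]]


def answer(query: str, docs: list[str]) -> str:
    """Ground the LLM's answer in the retrieved context (the 'G' in RAG)."""
    context = "\n\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```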
Job Title: IT Head – Fintech Industry
Department: Information Technology
Location: Andheri East
Reports to: COO
Job Type: Full-Time
Job Overview:
The IT Head in a fintech company is responsible for overseeing the entire information technology infrastructure, including the development, implementation, and maintenance of IT systems, networks, and software solutions. The role involves leading the IT team, managing technology projects, ensuring data security, and ensuring the smooth functioning of all technology operations. As the company scales, the IT Head will play a key role in enabling digital innovation, optimizing IT processes, and ensuring compliance with relevant regulations in the fintech sector.
Key Responsibilities:
1. IT Strategy and Leadership
- Develop and execute the company’s IT strategy to align with the organization’s overall business goals and objectives, ensuring the integration of new technologies and systems.
- Lead, mentor, and manage a team of IT professionals, setting clear goals, priorities, and performance expectations.
- Stay up-to-date with industry trends and emerging technologies, providing guidance and recommending innovations to improve efficiency and security.
- Oversee the design, implementation, and maintenance of IT systems that support fintech products, customer experience, and business operations.
2. IT Infrastructure Management
- Oversee the management and optimization of the company’s IT infrastructure, including servers, networks, databases, and cloud services.
- Ensure the scalability and reliability of IT systems to support the company’s growth and increasing demand for digital services.
- Manage system updates, hardware procurement, and vendor relationships to ensure that infrastructure is cost-effective, secure, and high-performing.
3. Cybersecurity and Data Protection
- Lead efforts to ensure the company’s IT infrastructure is secure, implementing robust cybersecurity measures to protect sensitive customer data, financial transactions, and intellectual property.
- Develop and enforce data protection policies and procedures to ensure compliance with data privacy regulations (e.g., GDPR, CCPA, RBI, etc.).
- Conduct regular security audits and vulnerability assessments, working with the security team to address potential risks proactively.
4. Software Development and Integration
- Oversee the development and deployment of software applications and tools that support fintech operations, including payment gateways, loan management systems, and customer engagement platforms.
- Collaborate with product teams to identify technological needs, integrate new features, and optimize existing products for improved performance and user experience.
- Ensure the seamless integration of third-party platforms, APIs, and fintech partners into the company’s core systems.
5. IT Operations and Support
- Ensure the efficient day-to-day operation of IT services, including helpdesk support, system maintenance, and troubleshooting.
- Establish service level agreements (SLAs) for IT services, ensuring that internal teams and customers receive timely support and issue resolution.
- Manage incident response, ensuring quick resolution of system failures, security breaches, or service interruptions.
6. Budgeting and Cost Control
- Manage the IT department’s budget, ensuring cost-effective spending on technology, software, hardware, and IT services.
- Analyze and recommend investments in new technologies and infrastructure that can improve business performance while optimizing costs.
- Ensure the efficient use of IT resources and the appropriate allocation of budget to support business priorities.
7. Compliance and Regulatory Requirements
- Ensure IT practices comply with relevant industry regulations and standards, such as financial services regulations, data privacy laws, and cybersecurity guidelines.
- Work with legal and compliance teams to ensure that all systems and data handling procedures meet industry-specific regulatory requirements (e.g., PCI DSS, ISO 27001).
- Provide input and guidance on IT-related regulatory audits and assessments, ensuring the organization is always in compliance.
8. Innovation and Digital Transformation
- Drive innovation by identifying opportunities for digital transformation within the organization, using technology to streamline operations and enhance the customer experience.
- Collaborate with other departments (marketing, customer service, product development) to introduce new fintech products and services powered by cutting-edge technology.
- Oversee the implementation of AI, machine learning, and other advanced technologies to enhance business performance, operational efficiency, and customer satisfaction.
9. Vendor and Stakeholder Management
- Manage relationships with external technology vendors, service providers, and consultants to ensure the company gets the best value for its investments.
- Negotiate contracts, terms of service, and service level agreements (SLAs) with vendors and technology partners.
- Ensure strong communication with business stakeholders, understanding their IT needs and delivering technology solutions that align with company objectives.
Qualifications and Skills:
Education:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (Master’s degree or relevant certifications like ITIL, PMP, or CISSP are a plus).
Experience:
- 8-12 years of experience in IT management, with at least 4 years in a leadership role, preferably within the fintech, banking, or technology industry.
- Strong understanding of IT infrastructure, cloud computing, database management, and cybersecurity best practices.
- Proven experience in managing IT teams and large-scale IT projects, especially in fast-paced, growth-driven environments.
- Knowledge of fintech products and services, including digital payments, blockchain, and online lending platforms.
Skills:
- Expertise in IT infrastructure management, cloud services (AWS, Azure, Google Cloud), and enterprise software.
- Strong understanding of cybersecurity protocols, data protection laws, and IT governance frameworks.
- Experience with software development and integration, particularly for fintech platforms.
- Strong project management and budgeting skills, with a track record of delivering IT projects on time and within budget.
- Excellent communication and leadership skills, with the ability to manage cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple priorities in a fast-paced, high-pressure environment.
Role & Responsibilities
Responsible for ensuring that the architecture and design of the platform remain top-notch with respect to scalability, availability, reliability, and maintainability
Act as a key technical contributor as well as a hands-on contributing member of the team.
Own end-to-end availability and performance of features, driving rapid product innovation while ensuring a reliable service.
Work closely with stakeholders such as Program Managers, Product Managers, the Reliability and Continuity Engineering (RCE) team, and the QE team to estimate and execute features/tasks independently.
Maintain and drive tech backlog execution for non-functional requirements of the platform required to keep the platform resilient
Assist in release planning and prioritization based on technical feasibility and engineering constraints
A zeal to continually find new ways to improve architecture, design and ensure timely delivery and high quality.
1. GCP: GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, BQ optimization, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud: building pipelines (Python/Java) plus scripting, best practices, challenges
3. Knowledge of batch and streaming data ingestion; building end-to-end data pipelines on GCP
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud; SQL vs NoSQL; types of NoSQL databases (at least 2)
5. Data warehouse concepts: beginner to intermediate level
6. Data modeling, GCP databases, DB schema (or similar)
7. Hands-on data modelling for OLTP and OLAP systems
8. In-depth knowledge of conceptual, logical, and physical data modelling
9. Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same (a BigQuery sketch follows this list)
10. Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
11. Working experience with at least one data modelling tool, preferably DBSchema or Erwin
12. Good understanding of GCP databases like AlloyDB, Cloud SQL, and BigQuery
13. Functional knowledge of the mutual fund industry will be a plus
14. Should be willing to work from Chennai; office presence is mandatory
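As a concrete instance of the partitioning point above (item 9), here is a minimal sketch that creates a partitioned, clustered BigQuery table through the google-cloud-bigquery client; the project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

ddl = """
CREATE TABLE IF NOT EXISTS `my_project.analytics.events`
(
  event_id  STRING,
  user_id   STRING,
  event_ts  TIMESTAMP,
  payload   JSON
)
PARTITION BY DATE(event_ts)   -- prune scans to the dates actually queried
CLUSTER BY user_id            -- co-locate rows for selective filters
"""

client.query(ddl).result()  # .result() blocks until the job finishes
```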
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement dimension and fact tables (a toy schema follows this list)
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional data from ERP, CRM, and HRIS applications; model, extract, and transform it into reporting and analytics.
● Define and document the use of BI through user experiences/use cases and prototypes, and test and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
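To ground the dimensional-modeling work above, here is a toy star schema (one fact table, two dimension tables) built in SQLite purely for illustration; a real warehouse would use its own DDL, and all table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20240131
    full_date    TEXT,
    month        INTEGER,
    year         INTEGER
);

CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);

-- Fact rows stay narrow: foreign keys to dimensions plus additive measures
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    quantity     INTEGER,
    revenue      REAL
);
""")
```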


About the Company – Gruve
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.
As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.
Why Gruve
At Gruve, we foster a culture of:
- Innovation, collaboration, and continuous learning
- Diversity and inclusivity, where everyone is encouraged to thrive
- Impact-focused work — your ideas will shape the products we build
We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.
Position Summary
We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.
You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.
Key Responsibilities
- Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
- Build and maintain automation to track:
- Physical Assets: Servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
- Virtual Assets: Load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
- Cloud Assets: Public cloud services, process registry, database resources.
- Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
- Optimize performance and scalability to handle large-scale asset data in real-time.
- Document system architecture, implementation, and usage.
- Generate reports for compliance and auditing.
- Ensure integration with existing systems for streamlined asset management.
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related field
- 3–6 years of experience in software development
- Strong proficiency in Golang and Python
- Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
- Deep understanding of automation solutions and parallel computing principles
Preferred Qualifications
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles and Responsibilities:
- Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
- Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage.
- Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
- Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
- Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
- Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
- Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
- Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
- Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
Basic Qualifications
- 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
- Strong proficiency in SQL and experience with BigQuery for data warehousing.
- Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
- Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
- Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
- Knowledge of data governance, security best practices, and compliance requirements in cloud environments.
Preferred Qualifications:
- Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
- Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
- Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
- Proficiency in Python, Java, or Scala for data engineering and pipeline development.
- Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
- Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
- Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.


Full Stack Developer
Location: Hyderabad
Experience: 7+ Years
Type: BCS - Business Consulting Services
RESPONSIBILITIES:
• Strong programming skills in Node.js [Must], React.js, Android, and Kotlin [Must]
• Hands-on experience in UI development with a good sense of UX
• Hands-on experience in database design and management
• Hands-on experience creating and maintaining backend frameworks for mobile applications
• Hands-on development experience on cloud-based platforms like GCP/Azure/AWS
• Ability to manage and provide technical guidance to the team
• Strong experience in designing APIs using RAML, Swagger, etc.
• Service definition development
• API standards, security, and policies definition and management
REQUIRED EXPERIENCE:
* Bachelor’s and/or Master’s degree in Computer Science or equivalent work experience
* Excellent analytical, problem-solving, and communication skills
* 7+ years of software engineering experience in a multinational company
* 6+ years of development experience in Kotlin, Node.js, and React.js
* 3+ years of experience creating solutions on a native public cloud (GCP, AWS, or Azure)
* Experience with Git or a similar version control system, and continuous integration
* Proficiency in automated unit test development practices and design methodologies
* Fluent English
Job Summary
We are seeking a skilled Infrastructure Engineer with 3 to 5 years of experience in Kubernetes to join our team. The ideal candidate will be responsible for managing, scaling, and securing our cloud infrastructure, ensuring high availability and performance. You will work closely with DevOps, SREs, and development teams to optimize our containerized environments and automate deployments.
Key Responsibilities:
- Deploy, manage, and optimize Kubernetes clusters in cloud and/or on-prem environments.
- Automate infrastructure provisioning and management using Terraform, Helm, and CI/CD pipelines.
- Monitor system performance and troubleshoot issues related to containers, networking, and storage (a small sketch follows this list).
- Ensure high availability, security, and scalability of Kubernetes workloads.
- Manage logging, monitoring, and alerting using tools like Prometheus, Grafana, and ELK stack.
- Optimize resource utilization and cost efficiency within Kubernetes clusters.
- Implement RBAC, network policies, and security best practices for Kubernetes environments.
- Work with CI/CD pipelines (Jenkins, ArgoCD, GitHub Actions, etc.) to streamline deployments.
- Collaborate with development teams to containerize applications and enhance performance.
- Maintain disaster recovery and backup strategies for Kubernetes workloads.
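As a small taste of the monitoring and troubleshooting work above, here is a sketch that uses the official kubernetes Python client to flag pods that are not healthy; it assumes a kubeconfig is available locally.

```python
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

# Flag any pod that is neither Running nor Succeeded
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```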
Required Skills & Qualifications:
- 3 to 5 years of experience in infrastructure and cloud technologies.
- Strong hands-on experience with Kubernetes (K8s), Helm, and container orchestration.
- Experience with cloud platforms (AWS, GCP, Azure) and managed Kubernetes services (EKS, GKE, AKS).
- Proficiency in Terraform, Ansible, or other Infrastructure as Code (IaC) tools.
- Knowledge of Linux system administration, networking, and security.
- Experience with Docker, container security, and runtime optimizations.
- Hands-on experience with monitoring, logging, and observability tools.
- Scripting skills in Bash, Python, or Go for automation.
- Good understanding of CI/CD pipelines and deployment automation.
- Strong troubleshooting skills and experience handling production incidents
Proficient in Looker Actions, Looker dashboarding, Looker data entry, LookML, SQL queries, BigQuery, Looker Studio, and GCP.
Remote Working
2 pm to 12 am IST or
10:30 AM to 7:30 PM IST
Sunday to Thursday
Responsibilities:
● Create and maintain LookML code, which defines data models, dimensions, measures, and relationships within Looker.
● Develop reusable LookML components to ensure consistency and efficiency in report and dashboard creation.
● Build and customize dashboards, incorporating data visualizations such as charts and graphs to present insights effectively.
● Write complex SQL queries when necessary to extract and manipulate data from underlying databases and also optimize SQL queries for performance.
● Connect Looker to various data sources, including databases, data warehouses, and external APIs.
● Identify and address bottlenecks that affect report and dashboard loading times, and optimize Looker performance by tuning queries, caching strategies, and exploring indexing options (a dry-run sketch follows this list).
● Configure user roles and permissions within Looker to control access to sensitive data, and implement data security best practices, including row-level and field-level security.
● Develop custom applications or scripts that interact with Looker's API for automation and integration with other tools and systems.
● Use version control systems (e.g., Git) to manage LookML code changes and collaborate with other developers.
● Provide training and support to business users, helping them navigate and use Looker effectively.
● Diagnose and resolve technical issues related to Looker, data models, and reports.
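One concrete habit behind the query-optimization and FinOps points above: a BigQuery dry run reports bytes scanned before anything is billed. A minimal sketch with the google-cloud-bigquery client follows; the billing-export table name is hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

sql = """
SELECT user_id, SUM(cost) AS total_cost
FROM `my_project.billing.export`
GROUP BY user_id
"""

job = client.query(sql, job_config=job_config)   # dry run: no bytes billed
print(f"Would scan {job.total_bytes_processed / 1e9:.2f} GB")
```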
Skills Required:
● Experience in Looker's modeling language, LookML, including data models, dimensions, and measures.
● Strong SQL skills for writing and optimizing database queries across different SQL databases (GCP/BQ preferable)
● Knowledge of data modeling best practices
● Proficient in BigQuery, billing data analysis, GCP billing, unit costing, and invoicing, with the ability to recommend cost optimization strategies.
● Previous experience in Finops engagements is a plus
● Proficiency in ETL processes for data transformation and preparation.
● Ability to create effective data visualizations and reports using Looker’s dashboard tools.
● Ability to optimize Looker performance by fine-tuning queries, caching strategies, and indexing.
● Familiarity with related tools and technologies, such as data warehousing (e.g., BigQuery ), data transformation tools (e.g., Apache Spark), and scripting languages (e.g., Python).
We’re looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows.
- Collaborate with teams to gather requirements and build scalable solutions.
- Ensure data governance, security, and optimal performance of systems.
- Mentor junior engineers and drive end-to-end project delivery.
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects.
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms.
- Expertise in big data tools (e.g., Apache Spark, Kafka); a PySpark sketch follows this posting.
- Excellent communication skills and leadership abilities.
Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
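For flavor, here is a compact PySpark sketch of the batch ETL such a role leads: read raw events, aggregate daily metrics, and write a partitioned output. The paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_etl").getOrCreate()

# Read raw event data from object storage (hypothetical path)
events = spark.read.parquet("s3://raw-bucket/events/")

# Aggregate: daily event and user counts per country
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "country")
    .agg(
        F.count("*").alias("events"),
        F.countDistinct("user_id").alias("users"),
    )
)

# Write a warehouse-friendly, partitioned output
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://warehouse-bucket/daily_events/"
)
```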

As a Solution Architect, you will collaborate with our sales, presales and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You possess good communication skills, both written and verbal, enabling you to convey complex technical concepts clearly and effectively. You are a team player and a customer-focused, self-motivated, responsible individual who can work under pressure with a positive attitude. You must have experience in managing and handling RFPs/RFIs, client demos and presentations, and converting opportunities into winning bids. You possess a strong work ethic, a positive attitude, and enthusiasm for new challenges. You can multi-task and prioritize (good time management skills) and are willing to learn. You should be able to work independently with little or no supervision, be process-oriented, take a methodical approach, and demonstrate a quality-first mindset.
The ability to convert a client’s business challenges/priorities into a winning proposal/bid through excellence in technical solutioning will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new generation analytics platform like MS Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate Data services (e.g. Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI etc) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes.
- Ability to understand Conceptual/Logical/Physical Data Modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
What you will Bring
- 10+ years of working in data analytics and AI technologies from consulting, implementation and design perspectives
- Certifications in data engineering, analytics, cloud, AI will be a certain advantage
- Bachelor’s in engineering/ technology or an MCA from a reputed college is a must
- Prior experience of working as a solution architect during presales cycle will be an advantage
Soft Skills
- Communication Skills
- Presentation Skills
- Flexible and Hard-working
Technical Skills
- Knowledge of Presales Processes
- Basic understanding of business analytics and AI
- High IQ and EQ
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
POSITION: Sr. DevOps Engineer
Job Type: Work From Office (5 days)
Location: Sector 16A, Film City, Noida / Mumbai
Relevant Experience: Minimum 4+ years
Salary: Competitive
Education: B.Tech
About the Company: Devnagri is an AI company dedicated to personalizing business communication and making it hyper-local to attract non-English speakers. We address the significant gap in internet content availability for most of the world’s population who do not speak English. For more detail, visit www.devnagri.com
We seek a highly skilled and experienced Senior DevOps Engineer to join our dynamic team. As a key member of our technology department, you will play a crucial role in designing and implementing scalable, efficient and robust infrastructure solutions with a strong focus on DevOps automation and best practices.
Roles and Responsibilities
- Design, plan, and implement scalable, reliable, secure, and robust infrastructure architectures
- Manage and optimize cloud-based infrastructure components
- Architect and implement containerization technologies such as Docker and Kubernetes
- Implement the CI/CD pipelines to automate the build, test, and deployment processes
- Design and implement effective monitoring and logging solutions for applications and infrastructure. Establish metrics and alerts for proactive issue identification and resolution
- Work closely with cross-functional teams to troubleshoot and resolve issues.
- Implement and enforce security best practices across infrastructure components
- Establish and enforce configuration standards across various environments.
- Implement and manage infrastructure using Infrastructure as Code principles
- Leverage tools like Terraform for provisioning and managing resources.
- Stay abreast of industry trends and emerging technologies.
- Evaluate and recommend new tools and technologies to enhance infrastructure and operations
Must have Skills:
Cloud ( AWS & GCP ), Redis, MongoDB, MySQL, Docker, bash scripting, Jenkins, Prometheus, Grafana, ELK Stack, Apache, Linux
Good to have Skills:
Kubernetes, Collaboration and Communication, Problem Solving, IAM, WAF, SAST/DAST
Interview Process:
Screening and shortlisting >> 3 technical rounds >> 1 managerial round >> HR closure
Include a short success story about your journey into DevOps and tech.
Cheers
For more details, visit our website: https://www.devnagri.com
Hiring for a Big4 company
GCP Data Engineer
GCP - Mandatory
3-7 years
Gurgaon location
Only candidates serving notice or immediate joiners can apply
Notice period: less than 30 days


Job Profile : Python Developer
Job Location : Ahmedabad, Gujarat - On site
Job Type : Full time
Experience - 1-3 Years
Key Responsibilities:
Design, develop, and maintain Python-based applications and services.
Collaborate with cross-functional teams to define, design, and ship new features.
Write clean, maintainable, and efficient code following best practices.
Optimize applications for maximum speed and scalability.
Troubleshoot, debug, and upgrade existing systems.
Integrate user-facing elements with server-side logic.
Implement security and data protection measures.
Work with databases (SQL/NoSQL) and integrate data storage solutions.
Participate in code reviews to ensure code quality and share knowledge with the team.
Stay up-to-date with emerging technologies and industry trends.
Requirements:
1-3 years of professional experience in Python development.
Strong knowledge of Python frameworks such as Django, Flask, or FastAPI (a minimal sketch follows this list).
Experience with RESTful APIs and web services.
Proficiency in working with databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
Experience with version control systems (e.g., Git).
Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
Understanding of containerization tools like Docker and orchestration tools like Kubernetes is good to have
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
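As a taste of the framework work above, here is a minimal FastAPI service with one typed endpoint; the model and routes are hypothetical.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


@app.post("/items")
def create_item(item: Item) -> dict:
    # A real service would persist to PostgreSQL/MongoDB here
    return {"status": "created", "item": item.model_dump()}  # .dict() on pydantic v1


@app.get("/health")
def health() -> dict:
    return {"ok": True}
```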
Good to Have:
Experience with data analysis and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
Knowledge of asynchronous programming and event-driven architecture.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with microservices architecture.
Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
Hands-on experience with RAG and LLM model integration would be a plus.

Responsibilities:
- Design, develop, and maintain robust and scalable backend systems using PHP and the Laravel framework.
- Develop and implement RESTful APIs for mobile and web applications.
- Optimize database performance and ensure data integrity.
- Implement security best practices to protect sensitive data.
- Integrate with third-party services and APIs.
- Write clean, well-documented, and testable code.
- Participate in code reviews and contribute to improving code quality.
- Troubleshoot and debug issues, and provide timely resolutions.
- Contribute to the continuous improvement of the development process.
- Mentor junior developers (if applicable).
Qualifications:
- 4+ years of experience in backend development, with a strong focus on PHP and Laravel.
- Deep understanding of the Laravel framework, including Eloquent ORM, routing, and templating.
- Experience with API design and development.
- Solid understanding of database design and optimization (MySQL).
- Familiarity with version control systems (e.g., Git).
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills.
- Experience with Google Cloud Platform is a plus.
- Experience in the healthcare domain is a plus.

We're Hiring: Senior Unity Developer (Multiplayer | Mobile)
Hyderabad, Onsite
Full-Time | 4+ Years Experience
Are you passionate about building next-level mobile games? We’re on the lookout for a Senior Unity Developer who thrives in multiplayer environments, loves working with cutting-edge tech (like Photon Fusion), and understands how to architect great player experiences with FSMs and AI bots.
This is your chance to work alongside a world-class creative team and shape the gameplay of tomorrow.
What You’ll Do
Collaborate with developers, designers, and artists to build engaging, high-performance mobile games
Design and implement player-facing gameplay systems and features
Build scalable finite state machines (FSMs) for character, bot, and UI behavior (a toy sketch follows this list)
Lead architecture and ensure code quality, efficiency, and scalability
Optimize and debug gameplay using Unity profiling tools
Mentor junior devs and participate in regular code reviews
Stay ahead of mobile and multiplayer game trends, tools, and tech
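The FSM work above refers to the classic state-machine pattern; the toy sketch below shows it in Python purely for brevity (production Unity code would be C#), with hypothetical states and triggers.

```python
class Bot:
    def __init__(self):
        self.state = "idle"
        # state -> {trigger: next_state}
        self.transitions = {
            "idle":   {"see_player": "chase"},
            "chase":  {"in_range": "attack", "lost_player": "idle"},
            "attack": {"out_of_range": "chase", "player_dead": "idle"},
        }

    def handle(self, trigger: str) -> None:
        """Move to the next state if the trigger is valid here, else stay."""
        self.state = self.transitions[self.state].get(trigger, self.state)


bot = Bot()
bot.handle("see_player")   # idle -> chase
bot.handle("in_range")     # chase -> attack
print(bot.state)           # attack
```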
Required:
4+ years of Unity development experience (Android/iOS)
Experience in multiplayer game development (Photon Fusion or similar)
Solid grasp of FSM architecture and modular game logic
Skilled in C#, Unity APIs, and optimization for mobile platforms
Experience with AI-driven bots, game logic, and event systems
Strong debugging and profiling skills (Unity Profiler, Crashlytics, etc.)
Familiar with Google Play Console / App Store Connect
Excellent communication & teamwork skills
Passion for games and understanding of game design fundamentals
- Required: Minimum 3 years of experience as a Data Engineer
- Database knowledge: Experience with time-series and graph databases is a must, along with SQL databases like PostgreSQL and MySQL, or NoSQL databases like Firestore and MongoDB
- Data pipelines: Understanding of data pipeline processes like ETL, ELT, and streaming pipelines, with tools like AWS Glue, Google Dataflow, Apache Airflow, and Apache NiFi
- Data modeling: Knowledge of snowflake schemas and fact & dimension tables
- Data warehousing tools: Experience with Google BigQuery, Snowflake, Databricks
- Performance optimization: Indexing, partitioning, caching, and query optimization techniques
- Python or SQL scripting: Ability to write scripts for data processing and automation
Must be:
- Based in Mumbai
- Comfortable with Work from Office
- Available to join immediately
Responsibilities:
- Manage, monitor, and scale production systems across cloud (AWS/GCP) and on-prem.
- Work with Kubernetes, Docker, Lambdas to build reliable, scalable infrastructure.
- Build tools and automation using Python, Go, or relevant scripting languages.
- Ensure system observability using tools like New Relic, Prometheus, Grafana, CloudWatch, and PagerDuty (a small exporter sketch follows this list).
- Optimize for performance and low-latency in real-time systems using Kafka, gRPC, RTP.
- Use Terraform, CloudFormation, Ansible, Chef, Puppet for infra automation and orchestration.
- Load testing using Gatling, JMeter, and ensuring fault tolerance and high availability.
- Collaborate with dev teams and participate in on-call rotations.
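To make the observability point concrete, here is a tiny sketch that exposes a custom metric with prometheus_client for Prometheus/Grafana to scrape; the metric name, port, and probe are hypothetical.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("worker_queue_depth", "Jobs waiting in the work queue")

start_http_server(8000)  # metrics served at :8000/metrics

while True:
    queue_depth.set(random.randint(0, 50))  # stand-in for a real probe
    time.sleep(5)
```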
Requirements:
- B.E./B.Tech in CS, Engineering or equivalent experience.
- 3+ years in production infra and cloud-based systems.
- Strong background in Linux (RHEL/CentOS) and shell scripting.
- Experience managing hybrid infrastructure (cloud + on-prem).
- Strong testing practices and code quality focus.
- Experience leading teams is a plus.

What you’ll do
- Design, build, and maintain robust ETL/ELT pipelines for product and analytics data
- Work closely with business, product, analytics, and ML teams to define data needs
- Ensure high data quality, lineage, versioning, and observability
- Optimize performance of batch and streaming jobs
- Automate and scale ingestion, transformation, and monitoring workflows
- Document data models and key business metrics in a self-serve way
- Use AI tools to accelerate development, troubleshooting, and documentation
Must-Haves:
- 2–4 years of experience as a data engineer (product or analytics-focused preferred)
- Solid hands-on experience with Python and SQL
- Experience with data pipeline orchestration tools like Airflow or Prefect (a minimal flow sketch follows this list)
- Understanding of data modeling, warehousing concepts, and performance optimization
- Familiarity with cloud platforms (GCP, AWS, or Azure)
- Bachelor's in Computer Science, Data Engineering, or a related field
- Strong problem-solving mindset and AI-native tooling comfort (Copilot, GPTs)
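For a sense of the orchestration style above, here is a minimal Prefect flow, assuming Prefect 2.x; the task names and data are hypothetical, and the same extract-transform shape maps directly onto an Airflow DAG.

```python
from prefect import flow, task


@task
def extract() -> list[dict]:
    """Stand-in for pulling rows from a real source."""
    return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]


@task
def transform(rows: list[dict]) -> int:
    """Aggregate the extracted rows."""
    return sum(r["value"] for r in rows)


@flow
def daily_metrics():
    total = transform(extract())
    print(f"daily total: {total}")


if __name__ == "__main__":
    daily_metrics()
```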
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
What will you do at Fynd?
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Improve reliability, quality, and time-to-market of our suite of software solutions
- Be the 1st person to report the incident.
- Debug production issues across services and levels of the stack.
- Envision the overall solution for defined functional and non-functional requirements, and define the technologies, patterns, and frameworks to realise it.
- Build automated tools in Python / Java / GoLang / Ruby etc. (a small Python sketch follows this list).
- Help Platform and Engineering teams gain visibility into our infrastructure.
- Lead design of software components and systems, to ensure availability, scalability, latency, and efficiency of our services.
- Participate actively in detecting, remediating and reporting on Production incidents, ensuring the SLAs are met and driving Problem Management for permanent remediation.
- Participate in on-call rotation to ensure coverage for planned/unplanned events.
- Perform other tasks like load testing and generating system health reports.
- Periodically check all dashboards for readiness.
- Engage with other Engineering organizations to implement processes, identify improvements, and drive consistent results.
- Work with your SRE and Engineering counterparts to drive game days, training, and other response-readiness efforts.
- Participate in 24x7 support coverage as needed, troubleshooting and problem-solving complex issues with thorough root cause analysis on customer and SRE production environments.
- Collaborate with Service Engineering organizations to build and automate tooling, implement best practices to observe and manage the services in production and consistently achieve our market leading SLA.
- Improving the scalability and reliability of our systems in production.
- Evaluating, designing and implementing new system architectures.
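As an example of the small automated tools mentioned above, here is a Python sketch that probes a set of service health endpoints and reports failures; the service URLs are hypothetical, and requests is the only dependency.

```python
import requests

SERVICES = {
    "orders":  "https://orders.internal/healthz",   # hypothetical endpoints
    "catalog": "https://catalog.internal/healthz",
}


def check_all(timeout: float = 2.0) -> list[str]:
    """Return the names of services that fail their health check."""
    failing = []
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=timeout)
            if resp.status_code != 200:
                failing.append(name)
        except requests.RequestException:
            failing.append(name)
    return failing


if __name__ == "__main__":
    down = check_all()
    print("all healthy" if not down else f"failing: {', '.join(down)}")
```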
Some specific Requirements:
- B.E./B.Tech. in Engineering, Computer Science, technical degree, or equivalent work experience
- At least 3 years of experience managing production infrastructure; leading or managing a team is a huge plus.
- Experience with cloud platforms like AWS and GCP.
- Experience developing and operating large-scale distributed systems with Kubernetes, Docker, and serverless (Lambdas)
- Experience running real-time, low-latency, highly available applications (Kafka, gRPC, RTP)
- Comfortable with Python, Go, or any relevant programming language.
- Experience with monitoring and alerting using technologies like New Relic / Zabbix / Prometheus / Grafana / CloudWatch / Kafka / PagerDuty etc.
- Experience with one or more orchestration and deployment tools, e.g. CloudFormation / Terraform / Ansible / Packer / Chef.
- Experience with configuration management systems such as Ansible / Chef / Puppet.
- Knowledge of load testing methodologies and tools like Gatling, Apache JMeter.
- Able to work your way around a Unix shell.
- Experience running hybrid clouds and on-prem infrastructures on Red Hat Enterprise Linux / CentOS.
- A focus on delivering high-quality code through strong testing practices.
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also take an external course to upskill and grow; we reimburse it for you.
Culture
Community and Team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapists for better mental health, improved productivity, and work-life balance
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!

We are seeking a highly skilled Java full-stack developer with 5–8 years of experience to join our dynamic development team. The ideal candidate will have deep technical expertise across Java, Microservices, React/Redux, Kubernetes, DevOps tools, and GCP. You will work on designing and deploying full-stack applications that are robust, scalable, and aligned with business goals.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using Java, React, and Redux
- Build microservices following SOLID principles
- Collaborate with cross-functional teams, including product owners, QA, BAs, and other engineers
- Write clean, maintainable, and efficient code
- Perform debugging, troubleshooting, and optimization
- Participate in code reviews and contribute to engineering best practices
- Stay updated on security, privacy, and compliance requirements
- Work in an Agile/Scrum environment using tools like JIRA and Confluence
Technical Skills Required
Frontend
- Strong proficiency in JavaScript and modern ES6 features
- Expertise in React.js with advanced knowledge of hooks (useCallback, useMemo, etc.)
- Solid understanding of Redux for state management
Backend
- Strong hands-on experience in Java
- Building and maintaining Microservices architectures
DevOps & Infrastructure
- Experience with CI/CD tools: Jenkins, Nexus, Maven, Ansible
- Terraform for infrastructure as code
- Containerization and orchestration using Docker and Kubernetes/GKE
- Experience with IAM, security roles, service accounts
Cloud
- Proficiency with the services of at least one major cloud provider
Database
- Hands-on experience with PostgreSQL, MySQL, BigQuery
Scripting
- Proficiency in Bash/Shell scripting and Python
Non-Technical Skills
- Strong communication and interpersonal skills
- Ability to work effectively in distributed teams across time zones
- Quick learner and adaptable to new technologies
- Team player with a collaborative mindset
- Ability to explain complex technical concepts to non-technical stakeholders
Nice to Have
- Experience with NetReveal / Detica
Why Join Us?
- 🚀 Challenging Projects: Be part of innovative solutions making a global impact
- 🌍 Global Exposure: Work with international teams and clients
- 📈 Career Growth: Clear pathways for professional advancement
- 🧘‍♂️ Flexible Work Options: Hybrid and remote flexibility to support work-life balance
- 💼 Competitive Compensation: Industry-leading salary and benefits
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices: SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For:
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ years of hands-on engineering experience, with at least 2 years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile:
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info:
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
Backend - Software Development Engineer III
Experience - 7+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams, and MongoDB Solutions Architects.
Location - Chennai or Bangalore
- Relevant experience of 7+ years building high-performance back-end applications, with at least 3 projects delivered using the required technologies
- Good problem solving skills
- Strong mentoring capabilities
- Good understanding of software development life cycle
- Strong experience in system design and architecture
- Strong focus on quality of work delivered
- Excellent verbal and written communication skills
Required Technical Skills
- Extensive hands-on experience building high-performance web back-ends using Node.js and JavaScript/TypeScript
- Minimum of two years of hands-on experience with NestJS
- Strong experience with the Express.js framework
- Implementation experience in monolithic and microservices architecture
- Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases
- Experience integrating with any 3rd party services such as cloud SDKs (Preferable X), payments, push notifications, authentication etc…
- Hands-on experience with Redis, Kafka, or X
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest, or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies


AI Architect
Location and Work Requirements
- Position is based in KSA or UAE
- Must be eligible to work abroad without restrictions
- Regular travel within the region required
Key Responsibilities
- Minimum of 7 years of experience in the Data & Analytics domain, including at least 2 years as an AI Architect
- Drive technical solution design engagements and implementations
- Support customer implementations across various deployment modes (Public SaaS, Single-Tenant SaaS, and Self-Managed Kubernetes)
- Provide advanced technical support, including deployment troubleshooting and coordinating with customer AI Architect and product development teams when needed
- Guide customers in implementing generative AI solutions, including LLM integration, vector database management, and prompt engineering
- Coordinate and oversee platform installations and configuration work
- Assist customers with platform integration, including API implementation and custom model deployment
- Establish and promote best practices for AI governance and MLOps
- Proactively identify and address potential technical challenges before they impact customer success
Required Technical Skills
- Strong programming skills in Python with experience in data processing libraries (Pandas, NumPy)
- Proficiency in SQL and experience with various database technologies including MongoDB
- Container technologies: Docker (build, modify, deploy) and Kubernetes (kubectl, helm)
- Version control systems (Git) and CI/CD practices
- Strong networking fundamentals (TCP/IP, SSH, SSL/TLS)
- Shell scripting (Linux/Unix environments)
- Experience working in on-premises, air-gapped environments
- Experience with cloud platforms (AWS, Azure, GCP)
Required AI/ML Skills
- Deep expertise in both predictive machine learning and generative AI technologies
- Proven experience implementing and operationalizing large language models (LLMs)
- Strong knowledge of vector databases, embedding technologies, and similarity search concepts (a minimal search sketch follows this list)
- Advanced understanding of prompt engineering, LLM evaluation, and AI governance methods
- Practical experience with machine learning deployment and production operations
- Understanding of AI safety considerations and risk mitigation strategies
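To make the similarity-search item above concrete, here is a minimal sketch of exact nearest-neighbour lookup in Python. It assumes the faiss-cpu package; the dimensionality and the randomly generated embeddings are illustrative stand-ins for real model output.

```python
import faiss                     # assumption: faiss-cpu installed
import numpy as np

dim = 384                        # hypothetical embedding dimensionality
index = faiss.IndexFlatL2(dim)   # exact L2 search; no training step required

vectors = np.random.rand(10_000, dim).astype("float32")  # stand-in embeddings
index.add(vectors)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)   # 5 nearest neighbours per query row
print(ids[0], distances[0])
```

In practice one would typically switch to an approximate index (e.g., IVF or HNSW) once the corpus outgrows exact search.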
Required Qualities
- Excellent English communication skills, with the ability to explain complex technical concepts; Arabic is advantageous
- Strong consultative approach to understanding and solving business problems
- Proven ability to build trust through proactive customer engagement
- Strong problem-solving abilities and attention to detail
- Ability to work independently and as part of a distributed team
- Willingness to travel within the Middle East & Africa region as needed

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a Lead SDET with 8-10 years of experience to play a pivotal role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is an exciting opportunity to work on cutting-edge performance testing strategies and drive impactful initiatives across the organization.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Review application architecture and suggest improvements to enhance scalability
- Leverage AI at appropriate layers to improve efficiency and drive positive business outcomes
- Drive performance testing initiatives across the organization and ensure seamless execution
- Automate the capturing of performance metrics and generate performance trend reports
- Research, evaluate, and conduct PoCs for new tools and solutions
- Collaborate with developers and architects to enhance frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Ensure high availability and reliability of applications and services
Requirements:
- 6-9 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc. (a Locust sketch follows this list)
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimizing frontend and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
- Experience in increasing application/service availability from 99.9% (three 9s) to 99.99% or higher (four/five 9s)
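As an illustration of the load-testing tools named in the requirements, here is a minimal Locust sketch in Python; the endpoints and task weights are hypothetical, not part of the role description.

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)   # simulated think time between tasks

    @task(3)                    # reads run three times as often as writes
    def list_contacts(self):
        self.client.get("/api/contacts")    # hypothetical endpoint

    @task(1)
    def create_contact(self):
        self.client.post("/api/contacts", json={"name": "load-test"})
```

Such a file would be run with something like `locust -f locustfile.py --host https://staging.example.com`, ramping virtual users from the web UI or CLI.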
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website: https://www.gohighlevel.com/
YouTube Channel: https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post: https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a SDET III with 5-6 years of experience to play a crucial role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is a great opportunity to work on cutting-edge performance testing strategies and contribute to the success of our products.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Develop test strategies based on customer behavior to ensure high-performing applications
- Automate the capturing of performance metrics and generate performance trend reports (a minimal sketch follows this list)
- Collaborate with developers and architects to optimize frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Research, evaluate, and conduct PoCs for new tools and solutions
- Ensure high availability and reliability of applications and services
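To make the metrics-automation bullet above concrete, here is a minimal, standard-library-only Python sketch that reduces raw latency samples to the summary statistics usually worth trending; the sample values are made up.

```python
import statistics

def summarize_latencies(samples_ms):
    """Reduce raw request latencies (in ms) to trendable summary metrics."""
    s = sorted(samples_ms)
    cuts = statistics.quantiles(s, n=100)   # 99 cut points; cuts[94] ~ p95
    return {
        "count": len(s),
        "mean_ms": round(statistics.fmean(s), 2),
        "p95_ms": round(cuts[94], 2),
        "p99_ms": round(cuts[98], 2),
        "max_ms": s[-1],
    }

print(summarize_latencies([12, 15, 14, 230, 18, 16, 19, 17, 13, 500]))
```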
Requirements:
- 4-7 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc.
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimizing frontend performance and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.
We are looking for a Site Reliability Engineer to join our team and help ensure the availability, performance, and scalability of our critical systems. You will work closely with development and operations teams to automate processes, enhance system reliability, and improve observability.
Requirements:
- Experience: 4+ years in Site Reliability Engineering, DevOps, or Cloud Infrastructure roles
- Cloud Expertise: Hands-on experience with GCP and AWS
- Infrastructure as Code (IaC): Terraform, Helm, or equivalent tools
- Containerization & Orchestration: Docker, Kubernetes (GKE)
- Observability: Experience with Prometheus, Grafana, ELK, OpenTelemetry, or similar monitoring/logging tools
- Programming/Scripting: Proficiency in Python, Bash, or Shell scripting. Basic understanding of API parsing and JSON manipulation
- CI/CD Pipelines: Hands-on experience with Jenkins, GitHub Actions, ArgoCD, or similar tools
- Incident Management: Experience with on-call rotations, SLOs, SLIs, SLAs, Escalation Policies, and incident resolution
- Databases: Experience monitoring MongoDB, Redis, Elasticsearch, queue-based systems, etc.
Responsibilities:
- Develop and improve observability using monitoring, logging, tracing, and alerting tools (Prometheus, Grafana, ELK, OpenTelemetry, etc.); a minimal instrumentation sketch follows this list
- Optimize system performance, troubleshoot incidents, and conduct post-mortems/RCA to prevent future issues
- Collaborate with developers to enhance application reliability, scalability, and performance
- Drive cost optimization efforts in cloud environments.
- Monitor multiple databases (MongoDB, Redis, Elasticsearch, queue-based systems, etc.)
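As one concrete way into the observability bullet above, here is a minimal Python sketch using the prometheus_client library to expose custom metrics for scraping; the metric names and simulated workload are illustrative only.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():                       # records duration into the histogram
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)                    # serves /metrics for Prometheus to scrape
    while True:
        handle_request()
```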


Role - AI Architect
Location - Noida/Remote
Mode - Hybrid - 2 days WFO
As an AI Architect at CLOUDSUFI, you will play a key role in driving our AI strategy for customers in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, and Fintech sectors. You will be responsible for delivering large-scale AI transformation programs for multinational organizations, preferably Fortune 500 companies. You will also lead a team of 10-25 Data Scientists to ensure successful project execution.
Required Experience
● Minimum of 12 years of experience in the Data & Analytics domain, including at least 3 years as an AI Architect
● Master’s or Ph.D. in a discipline such as Computer Science, Statistics or Applied Mathematics with an emphasis or thesis work on one or more of the following: deep learning, machine learning, Generative AI and optimization.
● Must have experience articulating and presenting the business transformation journey using AI / GenAI technology to C-level executives
● Proven experience in delivering large-scale AI and GenAI transformation programs for multinational organizations
● Strong understanding of AI and GenAI algorithms and techniques
● Must have hands-on experience in open-source software development and cloud-native technologies, especially on the GCP tech stack
● Proficiency in Python and prominent ML packages. Proficiency in neural networks is desirable, though not essential
● Experience leading and managing teams of Data Scientists, Data Engineers and Data Analysts
● Ability to work independently and as part of a team
Additional Skills (Preferred):
● Experience in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, or Fintech sectors
● Knowledge of cloud platforms (AWS, Azure, GCP)
● GCP Professional Cloud Architect and GCP Professional Machine Learning Engineer Certification
● Experience with AI frameworks and tools (TensorFlow, PyTorch, Keras)

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM to 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (a minimal Airflow sketch follows this list).
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
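To illustrate the Raw → Silver → Gold orchestration referenced above, here is a minimal Airflow DAG sketch (Airflow 2.4+ assumed); the task bodies are placeholders where Databricks jobs would actually be submitted.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def promote(source: str, target: str, **_):
    # Placeholder: in practice this would trigger a Databricks job
    print(f"Transforming {source} -> {target}")

with DAG(
    dag_id="medallion_pipeline",     # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    raw_to_silver = PythonOperator(
        task_id="raw_to_silver",
        python_callable=promote,
        op_kwargs={"source": "raw", "target": "silver"},
    )
    silver_to_gold = PythonOperator(
        task_id="silver_to_gold",
        python_callable=promote,
        op_kwargs={"source": "silver", "target": "gold"},
    )
    raw_to_silver >> silver_to_gold  # gold builds only after silver succeeds
```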
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization/DevOps tooling (Docker, Kubernetes)
- Tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about which rituals we commit to, because rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.

Immediate Joiners Preferred. Notice Period - Immediate to 30 Days
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
About Us
adesso India is a dynamic and innovative IT Services and Consulting company based in Kochi. We are committed to delivering cutting-edge solutions that make a meaningful impact on our clients. As we continue to expand our development team, we are seeking a talented and motivated Backend Developer to join us in creating scalable and high-performance backend systems.
Job Description
We are looking for an experienced Backend and Data Developer with expertise in Java, SQL, and BigQuery development on public clouds, mainly GCP. As a Senior Data Developer, you will play a vital role in designing, building, and maintaining robust systems to support our data analytics. This position offers the opportunity to work on complex services, collaborating closely with cross-functional teams to drive successful project delivery.
Responsibilities
- Development and maintenance of data pipelines and automation scripts with Python
- Creation of data queries and optimization of database processes with SQL
- Use of bash scripts for system administration, automation and deployment processes
- Database and cloud technologies:
- Managing, optimizing, and querying large amounts of data in an Exasol database (prospectively Snowflake)
- Google Cloud Platform (GCP): operation and scaling of cloud-based BI solutions, in particular:
  - Composer (Airflow): orchestration of data pipelines for ETL processes
  - Cloud Functions: development of serverless functions for data processing and automation (a minimal sketch follows this list)
  - Cloud Scheduler: planning and automation of recurring cloud jobs
  - Cloud Secret Manager: secure storage and management of sensitive access data and API keys
  - BigQuery: processing, analyzing, and querying large amounts of data in the cloud
  - Cloud Storage: storage and management of structured and unstructured data
  - Cloud Monitoring: monitoring the performance and stability of cloud-based applications
- Data visualization and reporting:
- Creation of interactive dashboards and reports for the analysis and visualization of business data with Power BI
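For the Cloud Functions item above, here is a minimal HTTP-triggered function sketch, assuming the Python runtime and the functions-framework package; the payload shape and function name are hypothetical.

```python
import functions_framework  # assumption: functions-framework installed

@functions_framework.http
def refresh_report(request):
    """HTTP entry point; stands in for a real data-processing step."""
    payload = request.get_json(silent=True) or {}
    table = payload.get("table", "unknown")
    # ... run the actual query / export here ...
    return {"status": "ok", "table": table}, 200
```

Locally this can be served with `functions-framework --target refresh_report` before deploying to GCP.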
Requirements
- Minimum of 4-6 years of experience in backend development, with strong expertise in BigQuery, Python, and MongoDB or SQL.
- Strong knowledge of database design, querying, and optimization with SQL and MongoDB, and of designing ETL and orchestrating data pipelines.
- Minimum of 2 years of experience with at least one hyperscaler, ideally GCP, combined with cloud storage technologies, cloud monitoring, and cloud secret management.
- Excellent communication skills to effectively collaborate with team members and stakeholders.
Nice-to-Have:
- Knowledge of agile methodologies and working in cross-functional, collaborative teams.

Immediate Joiners Preferred. Notice Period - Immediate to 30 Days
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Overview
adesso India specializes in the optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
As part of our dynamic, international, cross-functional team, you will be responsible, as an experienced and skilled Full-stack PHP Developer, for the design, development, and deployment of modern, high-quality software solutions and applications.
Responsibilities:
Design, develop, and maintain the application.
Write clean, efficient, and reusable code.
Implement new features and functionality based on business requirements.
Participate in system and application architecture discussions.
Create technical designs and specifications for new features or enhancements.
Write and execute unit tests to ensure code quality.
Debug and resolve technical issues and software defects.
Conduct code reviews to ensure adherence to best practices.
Identify and fix vulnerabilities to ensure application integrity.
Working with other developers to ensure seamless integration of backend and frontend elements.
Collaborating with DevOps teams for deployment and scaling.
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer with focus on PHP and web technologies.
Strong experience with PHP 8 and 7, Symfony Framework, AWS / Azure / GCP, GitLab, and Angular and/or React. Additional technologies like Java, Python, Go, Kotlin, Rust, or similar are welcome.
Experienced with test-driven development and QA tools such as PHPStan and Deptrac.
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Full-stack development, PHP 8, PHP 7, Symfony Framework, AWS, Azure, GCP, GitLab, Angular, React, Java, Python, Go, Kotlin, Rust, Test-driven development, QA tools, PHPStan, Deptrac, Problem-solving, Debugging, Communication, Collaboration, System architecture, Technical design, Unit testing, Code review, Deployment, Scalability.
Architect
Experience - 12+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate architects eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams, and MongoDB Solutions Architects.
Location - Chennai or Bangalore
● Relevant experience of 12+ years building high-performance applications, with at least 3 years as an architect.
● Good problem solving skills
● Strong mentoring capabilities
● Good understanding of software development life cycle
● Strong experience in system design and architecture
● Strong focus on quality of work delivered
● Excellent verbal and written communication skills
Required Technical Skills
● Extensive hands-on experience building high-performance applications using Node.js (JavaScript/TypeScript) and .NET / Golang / Java / Python.
● Strong experience with appropriate framework(s).
● Well-versed in monolithic and microservices architectures.
● Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases
● Experience working with 3rd party integrations ranging from authentication, cloud services, etc.
● Hands-on experience with Kafka or RabbitMQ.
● Hands-on experience with CI/CD pipelines and at least one cloud provider - AWS / GCP / Azure
● Strong experience writing and maintaining clear documentation
Good to have skills:
● Experience working with frontend technologies - React.js, Vue.js, or Angular.
● Extensive experience consulting with customers directly for defining architecture or system design.
● Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies


Job Description: Machine Learning Engineer – LLM and Agentic AI
Location: Ahmedabad
Experience: 4+ years
Employment Type: Full-Time
________________________________________
About Us
Join a forward-thinking team at Tecblic, where innovation meets cutting-edge technology. We specialize in delivering AI-driven solutions that empower businesses to thrive in the digital age. If you're passionate about LLMs, machine learning, and pushing the boundaries of Agentic AI, we’d love to have you on board.
________________________________________
Key Responsibilities
• Research and Development: Research, design, and fine-tune machine learning models, with a focus on Large Language Models (LLMs) and Agentic AI systems.
• Model Optimization: Fine-tune and optimize pre-trained LLMs for domain-specific use cases, ensuring scalability and performance (a minimal sketch follows this list).
• Integration: Collaborate with software engineers and product teams to integrate AI models into customer-facing applications and platforms.
• Data Engineering: Perform data preprocessing, pipeline creation, feature engineering, and exploratory data analysis (EDA) to prepare datasets for training and evaluation.
• Production Deployment: Design and implement robust model deployment pipelines, including monitoring and managing model performance in production.
• Experimentation: Prototype innovative solutions leveraging cutting-edge techniques like reinforcement learning, few-shot learning, and generative AI.
• Technical Mentorship: Mentor junior team members on best practices in machine learning and software engineering.
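As a sketch of the fine-tuning workflow referenced above, here is a minimal Hugging Face example in Python; the base model, dataset, and hyperparameters are illustrative stand-ins rather than a prescribed setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # hypothetical small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")           # public demo dataset, for illustration

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```

The same loop generalizes to causal LLMs with a language-modeling head and, for larger models, to parameter-efficient methods such as LoRA.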
________________________________________
Requirements
Core Technical Skills:
• Proficiency in Python for machine learning and data science tasks.
• Expertise in ML frameworks and libraries like PyTorch, TensorFlow, Hugging Face, Scikit-learn, or similar.
• Solid understanding of Large Language Models (LLMs) such as GPT, T5, BERT, or Bloom, including fine-tuning techniques.
• Experience working on NLP tasks such as text classification, entity recognition, summarization, or question answering.
• Knowledge of deep learning architectures, such as transformers, RNNs, and CNNs.
• Strong skills in data manipulation using tools like Pandas, NumPy, and SQL.
• Familiarity with cloud services like AWS, GCP, or Azure, and experience deploying ML models using tools like Docker, Kubernetes, or serverless functions.
Additional Skills (Good to Have):
• Exposure to Agentic AI (e.g., autonomous agents, decision-making systems) and practical implementation.
• Understanding of MLOps tools (e.g., MLflow, Kubeflow) to streamline workflows and ensure production reliability.
• Experience with generative AI models (GANs, VAEs) and reinforcement learning techniques.
• Hands-on experience in prompt engineering and few-shot/fine-tuned approaches for LLMs.
• Familiarity with vector databases like Pinecone, Weaviate, or FAISS for efficient model retrieval.
• Version control (Git) and familiarity with collaborative development practices.
General Skills:
• Strong analytical and mathematical background, including proficiency in linear algebra, statistics, and probability.
• Solid understanding of algorithms and data structures to solve complex ML problems.
• Ability to handle and process large datasets using distributed frameworks like Apache Spark or Dask (optional but useful).
________________________________________
Soft Skills:
• Excellent problem-solving and critical-thinking abilities.
• Strong communication and collaboration skills to work with cross-functional teams.
• Self-motivated, with a continuous learning mindset to keep up with emerging technologies.

Apply Link - https://tally.so/r/wv0lEA
Key Responsibilities:
- Software Development:
- Design, implement, and optimise clean, scalable, and reliable code across [backend/frontend/full-stack] systems.
- Contribute to the development of microservices, APIs, or UI components as per the project requirements.
- System Architecture:
- Collaborate on the design and enhancement of system architecture.
- Analyse and identify opportunities for performance improvements and scalability.
- Code Reviews and Mentorship:
- Conduct thorough code reviews to ensure code quality, maintainability, and adherence to best practices.
- Mentor and support junior developers, fostering a culture of learning and growth.
- Agile Collaboration:
- Work within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, and retrospectives.
- Collaborate with Carbon Science, Designer, and other stakeholders to translate requirements into technical solutions.
- Problem-Solving:
- Investigate, troubleshoot, and resolve complex issues in production and development environments.
- Contribute to incident management and root cause analysis to improve system reliability.
- Continuous Improvement:
- Stay up-to-date with emerging technologies and industry trends.
- Propose and implement improvements to existing codebases, tools, and development processes.
Qualifications:
Must-Have:
- Experience: 2–5 years of professional software development experience in [specify languages/tools, e.g., Java, Python, JavaScript, etc.].
- Education: Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- Technical Skills:
- Strong proficiency in [programming languages/frameworks/tools].
- Experience with cloud platforms like AWS, Azure, or GCP.
- Knowledge of version control tools (e.g., Git) and CI/CD pipelines.
- Understanding of data structures, algorithms, and system design principles.
Nice-to-Have:
- Experience with containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Knowledge of database technologies (SQL and NoSQL).
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent written and verbal communication skills.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.

Integrated Technology Solutions for the Entertainment & Leisure Industry
Required Skills:
- AWS
- Azure experience
- Microservices
- Docker
- Kubernetes containers
- Serverless architecture
- Experience architecting cloud projects
- Good communication skills
- Minimum 2 years’ experience as an architect.
Job Summary:
As a Technical Architect, you will be responsible for designing, developing, and overseeing the implementation of technical solutions that meet the business needs of the organization. You will work closely with engineering teams to ensure that the architecture is scalable, secure, cost-effective, and aligned with the industry’s best practices. This is an excellent opportunity for someone with deep technical expertise and a passion for shaping the architecture of complex systems.
Key Responsibilities:
- Solution Design & Architecture: Lead the design and implementation of high-performance, scalable, and secure software architectures. Select appropriate technologies, frameworks, and platforms that align with business requirements and goals.
- Collaboration with Stakeholders: Work closely with product managers, business analysts, and development teams to understand the technical and business requirements. Translate those requirements into efficient, effective technical solutions.
- Technical Leadership: Provide technical leadership to development teams, ensuring the solution is implemented according to architectural principles and best practices. Offer mentorship and guidance to junior developers and architects.
- System Integration: Define how the application will integrate with other systems, services, or third-party tools. Implement API design and integration strategies for data exchange between various components and external systems. Oversee data flow, and design middleware or message brokers where necessary for smooth interaction between subsystems.
- Technology Evaluation & Integration: Evaluate and select new technologies, tools, and frameworks that improve system efficiency, maintainability, and scalability. Oversee the integration of systems and third-party services.
- Performance Optimization: Design and implement systems for optimal performance, including high availability, disaster recovery, and load balancing. Conduct performance tuning, troubleshoot bottlenecks, and recommend optimization strategies.
- Security & Compliance: Ensure that systems meet security best practices and compliance standards (e.g., GDPR, HIPAA). Implement robust security protocols, data protection strategies, and threat mitigation methods.
- Documentation & Knowledge Sharing: Maintain up-to-date architecture documentation and ensure knowledge is shared across the technical teams. Promote a culture of continuous improvement and documentation within the team.
- Code Reviews & Quality Assurance: Participate in code reviews to ensure that the development follows architectural guidelines and best practices. Advocate for clean, maintainable, and high-quality code.
- Cost Management: Design cost-effective solutions that optimize resource usage and minimize operational costs, particularly for cloud-based architectures.
Qualifications & Skills:
- Education:
o Bachelor's degree in Engineering or a related field. PMP or similar project management certification is a plus.
- Experience:
o 10+ years of experience in software development, with at least 3-4 years in a technical architecture or senior technical role.
o Proven experience designing and implementing complex, distributed systems.
- Technical Expertise:
o Strong experience with cloud platforms (AWS, Azure, Google Cloud).
o In-depth knowledge of system architecture patterns (microservices, serverless, event-driven, etc.).
o Expertise in modern programming languages (Java, C#, Python, JavaScript, etc.) and frameworks.
o Experience with databases (relational, NoSQL) and data management strategies.
- Soft Skills:
o Strong communication and interpersonal skills to work effectively with stakeholders across the organization.
o Leadership and mentoring abilities to guide and inspire development teams.
o Problem-solving mindset with the ability to troubleshoot and resolve complex technical issues.
Overview: We’re seeking a dynamic and results-oriented Field Sales Manager focused on selling innovative cloud-native technology solutions, including modernization, analytics, AI/ML, and Generative AI, specifically within India's vibrant startup ecosystem. If you’re motivated by fast-paced environments, adept at independently generating opportunities, and excel at closing deals, we'd love to connect with you.
Role Description: In this role, you'll independently identify and engage promising startups, execute strategic go-to-market plans, and build meaningful relationships across the AWS startup ecosystem. You’ll work closely with internal pre-sales and solutions teams to position and propose cloud-native solutions effectively, driving significant customer outcomes.
Key Responsibilities:
• Identify, prospect, and generate qualified pipeline opportunities independently and through collaboration with the AWS startup ecosystem.
• Conduct comprehensive discovery meetings to qualify potential opportunities, aligning customer needs with our cloud-native solutions.
• Collaborate closely with the pre-sales team to develop compelling proposals, presentations, and solution demonstrations.
• Lead end-to-end sales processes, from prospecting and qualification to negotiation and deal closure.
• Build and nurture strong relationships within the startup community, including founders, CTOs, venture capitalists, accelerators, and AWS representatives.
• Stay informed about emerging trends, competitive offerings, and market dynamics within cloud modernization, analytics, AI/ML, and Generative AI.
• Maintain accurate CRM updates, track sales metrics, and regularly report performance and pipeline status to leadership.
Qualifications & Experience:
• BE/BTech/MCA/ME/MTech Only
• 3-6 years of proven experience in technology field sales, ideally in cloud solutions, analytics, AI/ML, or digital transformation.
• Prior experience selling technology solutions directly to startups or growth-stage companies.
• Demonstrated ability to independently manage end-to-end sales cycles with strong results.
• Familiarity and understanding of AWS ecosystem and cloud-native architectures are highly preferred.
• Excellent relationship-building skills, along with exceptional communication, negotiation, and presentation abilities.
• Ability and willingness to travel as needed to customer sites, industry events, and partner meetings.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
About the Job
As a Cloud Backend Engineer you will design, develop, and maintain scalable and reliable backend systems in cloud environments. You will be responsible for building cloud-native applications, optimizing backend performance, and ensuring seamless integration with frontend services and third-party systems.
What You’ll Be Doing
- Backend Development
- Design and implement scalable and high-performance backend services and APIs for cloud-based applications.
- Develop microservices architectures and serverless functions to support business needs.
- Ensure backend systems are secure, reliable, and performant, adhering to best practices and industry standards.
- Cloud Infrastructure and Deployment
- Build and manage cloud infrastructure using platforms such as AWS, Google Cloud Platform (GCP), or Azure.
- Deploy and maintain backend services using cloud-native technologies (e.g., Kubernetes, Docker, AWS Lambda, Google Cloud Functions).
- Implement and manage CI/CD pipelines to automate deployment processes and ensure smooth delivery of updates.
- Performance Optimization
- Monitor and optimize the performance of backend services, including database queries, API responses, and system throughput.
- Implement caching strategies, load balancing, and other performance-enhancing techniques to ensure scalability and responsiveness.
- Troubleshoot and resolve performance issues and system bottlenecks.
- Database Management
- Design and manage relational and NoSQL databases, ensuring data integrity, scalability, and performance.
- Implement data access patterns and optimize queries for efficient data retrieval and storage.
- Ensure backup, recovery, and data security practices are in place.
- Integration and Collaboration
- Collaborate with frontend developers, DevOps engineers, and other stakeholders to integrate backend services with frontend applications and third-party systems.
- Participate in architectural discussions and provide input on system design and technology choices.
- Ensure clear communication and documentation of backend services, APIs, and system interactions.
- Security and Compliance
- Implement security best practices to protect backend services from threats and vulnerabilities.
- Ensure compliance with relevant regulations and standards, including data privacy and protection requirements.
- Conduct regular security assessments and vulnerability scans to maintain system integrity.
- Testing and Quality Assurance
- Develop and maintain automated tests for backend services, including unit tests, integration tests, and end-to-end tests.
- Perform code reviews and participate in quality assurance processes to ensure high code quality and reliability.
- Monitor and address issues identified during testing and production deployments.
- Documentation and Knowledge Sharing
- Document backend services, APIs, and infrastructure setups to facilitate knowledge sharing and support.
- Create and maintain technical documentation, including architecture diagrams, API specifications, and deployment guides.
- Share knowledge and best practices with team members and contribute to a collaborative development environment.
What We Need To See
- Strong experience in backend development, cloud technologies, and distributed systems, with a focus on building robust, high-performance solutions.
- Minimum 5 years of experience in backend development, with a strong focus on cloud-based applications.
- Proven experience with cloud platforms (AWS, GCP, Azure) and cloud-native technologies.
- Experience in designing and implementing RESTful APIs, microservices, and serverless architectures.
- Technical Expertise in:
1. Backend Development
- Strong experience with backend programming languages such as Node.js or Python.
- Expertise in working with frameworks such as NestJS, Express.js, or Django.
2. Microservices Architecture
- Experience designing and implementing microservices architectures.
- Knowledge of service discovery, API gateways, and distributed tracing.
3. API Development
- Proficiency in designing, building, and maintaining RESTful and GraphQL APIs.
- Experience with API security, rate limiting, and authentication mechanisms (e.g., JWT, OAuth); see the token-handling sketch at the end of this section.
4. Database Management
- Strong knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
- Experience in database schema design, optimization, and management.
5. Cloud Services
- Hands-on experience with cloud platforms such as Azure, AWS, or Google Cloud.
6. Performance Optimization
- Experience with performance tuning and optimization of backend services.
7. Security
- Understanding of security best practices and experience implementing secure coding practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Ability to manage multiple priorities and work in a fast-paced, dynamic environment.
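As a token-handling illustration for the API-security requirement above, here is a minimal sketch using the PyJWT library. The secret, the claims, and the 15-minute expiry are assumptions for the sketch, not a prescribed implementation; real services load secrets from a secret manager.

```python
import datetime
import jwt  # PyJWT: pip install pyjwt

SECRET = "replace-with-a-real-secret"  # illustrative only; never hardcode in practice

def issue_token(user_id: str) -> str:
    """Sign a short-lived access token for the given user."""
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(minutes=15),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    """Return the user id if the token is valid; raises if expired or tampered."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["sub"]

token = issue_token("user-42")
print(verify_token(token))  # -> user-42
```

Verification here relies on PyJWT's built-in expiry check; rate limiting and OAuth flows would sit in front of this at the gateway layer.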
What You’ll Do:
* Establish a formal data practice for the organisation.
* Build & operate scalable and robust data architectures.
* Create pipelines for the self-service introduction and usage of new data.
* Implement DataOps practices.
* Design, develop, and operate data pipelines that support data scientists and machine learning engineers (see the orchestration sketch after this list).
* Build simple, highly reliable Data storage, ingestion, and transformation solutions which are easy to deploy and manage.
* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
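As an orchestration illustration for the pipeline work above, here is a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+). The DAG id, schedule, and the extract/load callables are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull new records from a source system.
    print("extracting")

def load():
    # Placeholder: write transformed records to the warehouse.
    print("loading")

with DAG(
    dag_id="daily_ingest",            # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task         # run extract before load
```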
Who You Are:
* Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data.
* Experience working with public clouds like GCP/AWS.
* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.
* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
* Proficient with SQL and Bash, plus Java (Spring Boot), Python, or another JVM-based language.
* Experience with Apache open-source projects such as Spark, Druid, Beam, or Airflow, and with big-data databases such as BigQuery or ClickHouse.
* Good communication skills with the ability to collaborate with both technical and non-technical people.
* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious


Key Responsibilities:
- Design, build, and maintain scalable, real-time data pipelines using Apache Flink (or Apache Spark).
- Work with Apache Kafka (mandatory) for real-time messaging and event-driven data flows (a minimal consumer sketch follows this list).
- Build data infrastructure on Lakehouse architecture, integrating data lakes and data warehouses for efficient storage and processing.
- Implement data versioning and cataloging using Apache Nessie, and optimize datasets for analytics with Apache Iceberg.
- Apply advanced data modeling techniques and performance tuning using Apache Doris or similar OLAP systems.
- Orchestrate complex data workflows using DAG-based tools like Prefect, Airflow, or Mage.
- Collaborate with data scientists, analysts, and engineering teams to develop and deliver scalable data solutions.
- Ensure data quality, consistency, performance, and security across all pipelines and systems.
- Continuously research, evaluate, and adopt new tools and technologies to improve our data platform.
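Flink and Spark jobs are typically authored in Java/Scala or through their Python APIs; as a minimal illustration of the Kafka ingestion side referenced above, here is a consumer sketch using the kafka-python client. The topic, broker address, and consumer group are assumptions.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Topic, broker address, and group id are illustrative assumptions.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="pipeline-demo",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Placeholder transform; a real pipeline would enrich the event and
    # forward it downstream (e.g., into a Flink job or a lakehouse table).
    print(message.topic, message.offset, event)
```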
Skills & Qualifications:
- 3–6 years of experience in data engineering, building scalable data pipelines and systems.
- Strong programming skills in Python, Go, or Java.
- Hands-on experience with stream processing frameworks – Apache Flink (preferred) or Apache Spark.
- Mandatory experience with Apache Kafka for stream data ingestion and message brokering.
- Proficiency with at least one DAG-based orchestration tool like Airflow, Prefect, or Mage.
- Solid understanding and hands-on experience with SQL and NoSQL databases.
- Deep understanding of data lakehouse architectures, including internal workings of data lakes and data warehouses, not just usage.
- Experience working with at least one cloud platform, preferably AWS (GCP or Azure also acceptable).
- Strong knowledge of distributed systems, data modeling, and performance optimization.
Nice to Have:
- Experience with Apache Doris or other MPP/OLAP databases.
- Familiarity with CI/CD pipelines, DevOps practices, and infrastructure-as-code in data workflows.
- Exposure to modern data version control and cataloging tools like Apache Nessie.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms such as AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes around them. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool); see the connectivity sketch after this list.
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
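dbt models themselves are written in SQL; as a minimal illustration of programmatic Snowflake access around such ELT work, here is a Python sketch using the snowflake-connector-python package. All connection details and the query are assumptions for the sketch.

```python
import snowflake.connector  # pip install snowflake-connector-python

# All connection details are illustrative; load credentials from a secrets
# manager in practice, never from source code.
conn = snowflake.connector.connect(
    account="my_account",
    user="analytics_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MARTS",
)

try:
    cur = conn.cursor()
    # Hypothetical mart table produced by a dbt model.
    cur.execute("SELECT order_date, SUM(amount) FROM orders GROUP BY order_date")
    for order_date, total in cur.fetchall():
        print(order_date, total)
finally:
    conn.close()
```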
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound and solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
Overview:
We are seeking a talented and experienced GCP Data Engineer with strong expertise in Teradata, ETL, and Data Warehousing to join our team. As a key member of our Data Engineering team, you will play a critical role in developing and maintaining data pipelines, optimizing ETL processes, and managing large-scale data warehouses on the Google Cloud Platform (GCP).
Responsibilities:
- Design, implement, and maintain scalable ETL pipelines on GCP (Google Cloud Platform).
- Develop and manage data warehouse solutions using Teradata and cloud-based technologies (BigQuery, Cloud Storage, etc.); see the BigQuery sketch after this list.
- Build and optimize high-performance data pipelines for real-time and batch data processing.
- Integrate, transform, and load large datasets into GCP-based data lakes and data warehouses.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Write efficient, clean, and reusable code for ETL processes and data workflows.
- Ensure data quality, consistency, and integrity across all pipelines and storage solutions.
- Implement data governance practices and ensure security and compliance of data processes.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Participate in the design and implementation of scalable data architectures using GCP services like BigQuery, Cloud Dataflow, and Cloud Pub/Sub.
- Optimize and automate data workflows for continuous improvement.
- Maintain up-to-date documentation of data pipeline architectures and processes.
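As a minimal illustration of the BigQuery work referenced above, here is a Python sketch using the google-cloud-bigquery client. The project, dataset, and query are assumptions, and credentials are taken from the ambient environment (e.g., GOOGLE_APPLICATION_CREDENTIALS).

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Project, dataset, and query are illustrative assumptions.
client = bigquery.Client(project="my-project")

query = """
    SELECT customer_id, COUNT(*) AS order_count
    FROM `my-project.sales.orders`
    GROUP BY customer_id
    ORDER BY order_count DESC
    LIMIT 10
"""

# Run the query and stream the result rows back.
for row in client.query(query).result():
    print(row.customer_id, row.order_count)
```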
Requirements:
Technical Skills:
- Google Cloud Platform (GCP): Extensive experience with BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Composer.
- ETL Tools: Expertise in building ETL pipelines using tools such as Apache NiFi, Apache Beam, or custom Python-based scripts (see the Beam sketch after this list).
- Data Warehousing: Strong experience working with Teradata for data warehousing, including data modeling, schema design, and performance tuning.
- SQL: Advanced proficiency in SQL and relational databases, particularly in the context of Teradata and GCP environments.
- Programming: Proficient in Python, Java, or Scala for building and automating data processes.
- Data Architecture: Knowledge of best practices in designing scalable data architectures for both structured and unstructured data.
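As a minimal illustration of the Apache Beam option above, here is a batch pipeline sketch using the Beam Python SDK. The file paths are placeholders; pointing the same pipeline at the DataflowRunner (with GCP pipeline options) would execute it on Cloud Dataflow instead of locally.

```python
import apache_beam as beam  # pip install apache-beam

# Runs on the local DirectRunner by default; file paths are illustrative.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("input.csv")
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "FilterValid" >> beam.Filter(lambda fields: len(fields) == 3)
        | "Format" >> beam.Map(lambda fields: ",".join(fields))
        | "Write" >> beam.io.WriteToText("output")
    )
```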
Experience:
- Proven experience as a Data Engineer, with a focus on building and managing ETL pipelines and data warehouse solutions.
- Hands-on experience in data modeling and working with complex, high-volume data in a cloud-based environment.
- Experience with data migration from on-premises to cloud environments (Teradata to GCP).
- Familiarity with Data Lake concepts and technologies.
- Experience with version control systems like Git and working in Agile environments.
- Knowledge of CI/CD and automation processes in data engineering.
Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills, both verbal and written, for interacting with technical and non-technical teams.
- Ability to work collaboratively in a fast-paced, cross-functional team environment.
- Strong attention to detail and ability to prioritize tasks.
Preferred Qualifications:
- Experience with other GCP tools such as Dataproc, Bigtable, and Cloud Functions.
- Knowledge of Terraform or similar infrastructure-as-code tools for managing cloud resources.
- Familiarity with data governance frameworks and data privacy regulations.
- Certifications in Google Cloud or Teradata are a plus.
Benefits:
- Competitive salary and performance-based bonuses.
- Health, dental, and vision insurance.
- 401(k) with company matching.
- Paid time off and flexible work schedules.
- Opportunities for professional growth and development.
Backend - Software Development Engineer II
Experience - 4+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record of helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will work on complex data migrations, modernizing legacy applications, and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute to mission-critical projects, interacting directly with business stakeholders, customers’ technical teams, and MongoDB Solutions Architects.
Location - Bangalore
Basic qualifications:
- Good problem solving skills
- Deep understanding of software development life cycle
- Excellent verbal and written communication skills
- Strong focus on quality of work delivered
- 4+ years of relevant experience building high-performance backend applications, with at least two projects implemented using the required technologies
Required Technical Skills:
- Extensive hands-on experience building high-performance web back-ends using Node.js, with a minimum of 3+ years of hands-on experience in Node.js and JavaScript/TypeScript
- Hands-on project experience with NestJS
- Strong experience with the Express.js framework
- Hands-on experience in data modeling and schema design in MongoDB
- Experience integrating with 3rd-party services such as cloud SDKs, payments, push notifications, authentication, etc.
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest, or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Experience with microservice architecture
- Experience working with other Relational and NoSQL Databases
- Experience with technologies such as Kafka and Redis
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job description
As an experienced and skilled full-stack developer on our dynamic, international, cross-functional team, you will be responsible for the design, development, and deployment of modern, high-quality software solutions and applications.
Responsibilities:
Design, develop, and maintain the application
Write clean, efficient, and reusable code
Implement new features and functionality based on business requirements
Participate in system and application architecture discussions
Create technical designs and specifications for new features or enhancements
Write and execute unit tests to ensure code quality
Debug and resolve technical issues and software defects
Conduct code reviews to ensure adherence to best practices
Identify and fix vulnerabilities to ensure application integrity
Work with other developers to ensure seamless integration of backend and frontend elements
Collaborate with DevOps teams on deployment and scaling
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled full-stack developer. Experience in the Utilities/Energy domain is appreciated.
Strong experience with Java (Spring Boot); AWS, Azure, or GCP; GitLab; and Angular and/or React. Additional technologies such as Python, Go, Kotlin, or Rust are welcome.
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Java, Spring Boot, AWS, Azure, GCP, GitLab, Angular, React, Python, Go, Kotlin, Rust, Full-stack development, Software architecture, Unit testing, Debugging, Code reviews, DevOps collaboration, Microservices, Cloud computing, RESTful APIs, Frontend-backend integration, Problem-solving, Communication, Team collaboration, Software deployment, Application security, Technical documentation.