
DataHavn IT Solutions is a company that specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything related to data, and we have the expertise to transform our customers' businesses through the right use of data.
About the Role:
As a Data Scientist specializing in Google Cloud, you will play a pivotal role in driving data-driven decision-making and innovation within our organization. You will leverage the power of Google Cloud's robust data analytics and machine learning tools to extract valuable insights from large datasets, develop predictive models, and optimize business processes.
Key Responsibilities:
- Data Ingestion and Preparation:
- Design and implement efficient data pipelines for ingesting, cleaning, and transforming data from various sources (e.g., databases, APIs, cloud storage) into Google Cloud Platform (GCP) data warehouses (BigQuery) or data lakes (Cloud Storage), using services such as Dataflow.
- Perform data quality assessments, handle missing values, and address inconsistencies to ensure data integrity.
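A minimal, library-free sketch of the kind of quality check described above: flag missing numeric values in incoming records and impute them with the column mean before loading. The field names and records are illustrative, not part of any real pipeline.

```python
from statistics import mean

def clean_records(records, numeric_field):
    """Replace missing values in one numeric field with the column mean."""
    present = [r[numeric_field] for r in records if r.get(numeric_field) is not None]
    fill = mean(present) if present else 0.0  # fall back to 0.0 if the column is empty
    cleaned = []
    for r in records:
        row = dict(r)  # copy so the raw input stays untouched
        if row.get(numeric_field) is None:
            row[numeric_field] = fill
        cleaned.append(row)
    return cleaned

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}, {"id": 3, "amount": 20.0}]
cleaned = clean_records(rows, "amount")
```

In a production pipeline the same logic would typically run inside a Dataflow transform or a BigQuery staging query rather than in-process.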
- Exploratory Data Analysis (EDA):
- Conduct in-depth EDA to uncover patterns, trends, and anomalies within the data.
- Utilize visualization techniques (e.g., Tableau, Looker) to communicate findings effectively.
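Before reaching for a visualization tool, EDA usually starts with simple summary statistics per column; a stdlib-only sketch (the sample values are illustrative):

```python
from statistics import mean, median, stdev

def summarize(values):
    """Basic EDA summary for one numeric column."""
    return {
        "count": len(values),
        "mean": mean(values),
        "median": median(values),
        "stdev": stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }

# A large mean/median gap like this one hints at outliers worth investigating.
stats = summarize([1, 2, 3, 4, 100])
```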
- Feature Engineering:
- Create relevant features from raw data to enhance model performance and interpretability.
- Explore techniques like feature selection, normalization, and dimensionality reduction.
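One of the normalization techniques mentioned above, min-max scaling, maps a column onto [0, 1]; a minimal sketch:

```python
def min_max_scale(values):
    """Rescale a numeric column to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant column: no spread to scale
    return [(v - lo) / (hi - lo) for v in values]

scaled = min_max_scale([2, 4, 6])
```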
- Model Development and Training:
- Develop and train predictive models using machine learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, neural networks) on GCP platforms like Vertex AI.
- Evaluate model performance using appropriate metrics and iterate on the modeling process.
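For a binary classifier, "appropriate metrics" typically means more than accuracy; a small sketch computing accuracy, precision, and recall from labels and predictions (the label vectors are illustrative):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

m = classification_metrics([1, 1, 0, 0], [1, 0, 0, 1])
```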
- Model Deployment and Monitoring:
- Deploy trained models into production environments using GCP's ML tools and infrastructure.
- Monitor model performance over time, identify drift, and retrain models as needed.
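One common way to quantify the drift mentioned above is the Population Stability Index (PSI), which compares a feature's live distribution against its training baseline; a simplified equal-width-bin sketch (bin count and samples are illustrative):

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample and live data."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # clamp away from zero so the log below stays defined
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [1, 2, 3, 4, 5, 6, 7, 8]
drifted = [5, 6, 7, 8, 8, 8, 8, 8]
```

A common rule of thumb treats PSI above roughly 0.2 as a signal to investigate and possibly retrain, though thresholds vary by team.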
- Collaboration and Communication:
- Work closely with data engineers, analysts, and business stakeholders to understand their requirements and translate them into data-driven solutions.
- Communicate findings and insights in a clear and concise manner, using visualizations and storytelling techniques.
Required Skills and Qualifications:
- Strong proficiency in Python or R programming languages.
- Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Cloud Dataproc, and Vertex AI.
- Familiarity with machine learning algorithms and techniques.
- Knowledge of data visualization tools (e.g., Tableau, Looker).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong communication and interpersonal skills.
Preferred Qualifications:
- Experience with cloud-native data technologies (e.g., Apache Spark, Kubernetes).
- Knowledge of distributed systems and scalable data architectures.
- Experience with natural language processing (NLP) or computer vision applications.
- Certifications in Google Cloud Platform or relevant machine learning frameworks.

JOB DESCRIPTION:
Location: Bangalore
Mode of Work: 3 days from office
Key skills: DSA (collections, hash maps, trees, linked lists, arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, key Java 8 features, database optimization, microservices, and REST APIs
- Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
- Implement and integrate APIs using Spring Framework and Apache CXF.
- Build microservices-based architecture for scalable and distributed systems.
- Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
- Optimize performance through efficient multithreading, memory management, and algorithm design.
- Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
- Work with RDBMS (preferably Sybase) for backend data integration.
- Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
- Work in Unix/Linux environments for deployment and troubleshooting.
Required Skills & Experience:
- 3–8 years of strong hands-on experience in Core Java, Spring Boot, microservices, and data structures.
- Deep understanding of Object-Oriented Programming (OOP), data structures, algorithms, and design patterns.
- Proven experience with Spring, CXF, and REST/SOAP web services.
- Solid understanding of microservices architecture and distributed systems.
- Experience working with Sybase or other relational databases.
- Expertise in multithreading, concurrency, and high-throughput server-side development.
- Exposure to capital markets, sales & trading platforms, or similar financial services applications.
- Good working knowledge of Unix/Linux environments.
- Experience in solution architecture and design documentation is a strong plus.
- Strong problem-solving skills, analytical thinking, and a proactive attitude.
- Excellent communication and interpersonal skills to work effectively with global teams.
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- Previous experience in investment banking, capital markets, or financial technology domains.
Position: MS Dynamics 2012 - Technical Consultant (Lead/Architect)
Experience: 5+ years as an MS Dynamics AX technical consultant
Location: Bangalore/Hyderabad (Office/Hybrid)
Salary: Negotiable
For this role, the candidate needs to support AX implementations, provide end-user support, and develop functional AX skill levels. The role holder will own requirements gathering and the definition of user stories in collaboration with other team members.
Required Skills:
- X++ Programming: Expert knowledge of X++ and development within Microsoft Dynamics AX 2012.
- AIF (Application Integration Framework): Strong experience in AX 2012 integration with third-party systems.
- SSRS Reporting: Skilled in designing and developing custom reports using SQL Server Reporting Services (SSRS).
- Data Migration: Experience in data migration using DIXF and other data tools.
- SQL Server: Strong understanding of SQL Server and ability to optimize database performance.
- MorphX Development Environment: Familiarity with MorphX and related development tools.
- Performance Tuning: Proven experience in system performance optimization, both at the application and database levels.
Experience & Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 4+ years of hands-on experience with Microsoft Dynamics AX 2012 development.
- Experience with at least one full lifecycle AX implementation.
- Knowledge of Microsoft Dynamics 365 F&O is a plus.
- Good understanding of ERP processes across modules like Finance, Supply Chain, and Manufacturing.
- Strong analytical and problem-solving skills.
Preferred Certifications:
- Microsoft Dynamics AX 2012 Development Introduction
- Microsoft Dynamics AX 2012 Installation and Configuration
- Microsoft Dynamics AX 2012 Trade and Logistics
Mail your updated resume with the details below:
Current CTC:
Expected CTC:
Notice period:
Total experience:
Relevant experience:
Do you have any certification:
Email: jobs[at]glansolutions[dot]com
Satish: 8851O 181 62
Video Engineer – Software Engineering and Media Processing @IXG Inc.
Location: Bangalore
About the Role
IXG is building the future of cloud-native, GPU-powered remote video production and media transport. We’re looking for a Video Engineer with deep expertise in C/C++ or Rust, GStreamer, and FFmpeg to help architect the next-gen media stack, designed for real-time streaming, low-latency workflows, and automated broadcast delivery.
What You'll Work On
• Design and optimize video/audio pipelines using GStreamer and FFmpeg
• Integrate modern streaming protocols: SRT, NDI, RIST, RTMP
• Work with codecs and containers: H.264, H.265, AAC, Opus, MPEG-TS, MP4, and others
• Troubleshoot latency, sync, buffer, and transport issues in live or automated media environments
• Collaborate with backend and edge software teams to deliver high-performance, distributed video systems
• Stay ahead of the curve on broadcast and streaming technologies
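The protocols and codecs above can be illustrated with a small Python helper that assembles an FFmpeg command line for a low-latency MPEG-TS push over SRT. The flags shown are standard FFmpeg options; the helper name, input file, and relay URL are illustrative placeholders, and this is a sketch rather than IXG's actual stack:

```python
def build_ffmpeg_cmd(input_url, output_url, video_codec="libx264", bitrate="4M"):
    """Assemble (but do not run) an FFmpeg command for a live MPEG-TS push."""
    return [
        "ffmpeg",
        "-re",                   # read input at native frame rate (live pacing)
        "-i", input_url,
        "-c:v", video_codec,     # H.264 via x264
        "-b:v", bitrate,
        "-c:a", "aac",
        "-tune", "zerolatency",  # x264 low-latency tuning
        "-f", "mpegts",          # MPEG-TS container for transport
        output_url,
    ]

cmd = build_ffmpeg_cmd("input.mp4", "srt://relay.example:9000?mode=caller")
```

In production the same command would be launched via `subprocess.run(cmd)` or, more likely, replaced by an in-process GStreamer pipeline for finer control over buffering and sync.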
What We’re Looking For
• 3–5 years of experience in Software Engineering & Media Processing
• Proficiency in C and C++ or Rust, with experience building performance-critical applications
• Strong experience working with GStreamer and FFmpeg in production
• Deep understanding of video/audio codecs, mux/demux, and streaming containers
• Hands-on experience with SRT, NDI, RIST, RTMP, or similar protocols
• Knowledge of media sync, buffering, and low-latency optimization techniques
• Comfortable working in Linux environments, debugging across layers
Bonus If You Have
• Experience with WebRTC or ultra-low latency video delivery
• Worked on GPU-based encoding/decoding (e.g., NVIDIA NVENC, Intel QuickSync)
• Familiarity with SMPTE standards, SDI workflows, or AES67 audio
• Exposure to cloud-based video processing stacks (e.g., AWS Media Services)
Why Work With IXG
We’re not retrofitting old tech for new workflows; we’re reimagining media infrastructure from the ground up, combining GPU-accelerated encoders, edge hardware, and smart cloud transport. Join us to help media teams go remote, at scale, without compromising on quality or latency.
• Work from Bangalore
• Be part of a high-agency, technically deep team
• Contribute to real-world deployments powering sports, news, and esports coverage
Sound like your kind of gig?
Apply now
Job Overview:
We are looking for a Senior Analyst who has led teams and managed system operations.
Key Responsibilities:
- Lead and mentor a team of analysts to drive high-quality execution.
- Design, write, and optimize SQL queries to derive actionable insights.
- Manage, monitor, and enhance Payment Governance Systems for accuracy and efficiency.
- Work cross-functionally with Finance, Tech, and Operations teams to maintain data integrity.
- Build and automate dashboards/reports to track key metrics and system performance.
- Identify anomalies and lead root cause analysis for payment-related issues.
- Define and document processes, SOPs, and governance protocols.
- Ensure compliance with internal control frameworks and audit readiness.
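The SQL-driven anomaly detection described above can be sketched with a self-contained example, using SQLite in place of a production warehouse; the schema, vendor names, and amounts are illustrative:

```python
import sqlite3

# In-memory stand-in for a payments table in the governance system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER, vendor TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?, ?)",
    [(1, "acme", 100.0), (2, "acme", 100.0), (3, "globex", 50.0)],
)

# Flag potential duplicate payments (same vendor, same amount) for
# root-cause analysis before they hit reconciliation.
dupes = conn.execute("""
    SELECT vendor, amount, COUNT(*) AS n
    FROM payments
    GROUP BY vendor, amount
    HAVING COUNT(*) > 1
""").fetchall()
```

The same query pattern (group, count, threshold) generalizes to most reconciliation checks; only the grouping keys change.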
Requirements:
We require candidates with the following qualifications:
- 3–5 years of experience in analytics, data systems, or operations.
- Proven track record of leading small to mid-size teams.
- Strong command over SQL and data querying techniques.
- Experience with payment systems, reconciliation, or financial data platforms.
- Analytical mindset with problem-solving abilities.
- Ability to work in a fast-paced, cross-functional environment.
- Excellent communication and stakeholder management skills.
Responsibilities:
1. Developing new user-facing features using React.js
2. Building reusable components and front-end libraries for future use
3. Translating designs and wireframes into high-quality code
4. Optimizing components for maximum performance across a vast array of web-capable devices and browsers
Skills:
1. Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
2. Thorough understanding of React.js and its core principles
3. Experience with popular React.js workflows (such as Flux or Redux)
4. Familiarity with newer specifications of ECMAScript
5. Experience with data structure libraries (e.g., Immutable.js)
6. Familiarity with RESTful APIs
7. Knowledge of modern authorization mechanisms, such as JSON Web Token
8. Familiarity with modern front-end build pipelines and tools
9. Experience with common front-end development tools such as Babel, Webpack, NPM, etc.
10. Ability to understand business requirements and translate them into technical requirements
11. A knack for benchmarking and optimization
12. Familiarity with code versioning tools such as Git
Position: DevOps Lead
Job Description
● Research, evangelize and implement best practices and tools for GitOps, DevOps, continuous integration, build automation, deployment automation, configuration management, infrastructure as code.
● Develop software solutions to support DevOps tooling; including investigation of bug fixes, feature enhancements, and software/tools updates
● Participate in the full systems life cycle with solution design, development, implementation, and product support using Scrum and/or other Agile practices
● Evaluating, implementing, and streamlining DevOps practices.
● Design and drive the implementation of fully automated CI/CD pipelines.
● Designing and creating cloud services and architecture for highly available and scalable environments.
● Leading the monitoring, debugging, and enhancement of pipelines for optimal operation and performance.
● Supervising, examining, and handling technical operations.
Qualifications
● 5 years of experience in managing application development, software delivery lifecycle, and/or infrastructure development and/or administration
● Experience with source code repository management tools, code merge and quality checks, continuous integration, and automated deployment & management using tools like Bitbucket, Git, Ansible, Terraform, Artifactory, ServiceNow, SonarQube, and Selenium.
● Minimum of 4 years of experience with approaches and tooling for automated build, delivery, and release of the software
● Experience and/or knowledge of CI/CD tools: Jenkins, Bitbucket Pipelines, Gitlab CI, GoCD.
● Experience with Linux systems (CentOS, RHEL, Ubuntu, SELinux, etc.) and Linux administration.
● Minimum of 4 years experience with managing medium/large teams including progress monitoring and reporting
● Experience and/or knowledge of Docker, Cloud, and Orchestration: GCP, AWS, Kubernetes.
● Experience and/or knowledge of system monitoring, logging, high availability, redundancy, autoscaling, and failover.
● Experience automating manual and/or repetitive processes.
● Experience and/or knowledge with networking and load balancing: Nginx, Firewall, IP network
Your Responsibilities:
- Own the Python-based backend stack that powers our product
- Collaborate with data scientists, backend developers (Node.js), front-end developers, and DevOps to design and implement new features
- Build and maintain backend jobs and RESTful services used internally in a macroservices/distributed-services environment
- Deploy and monitor the jobs and endpoints, ensuring availability and scalability (the ability to handle 100x the data processing load)
- Work on the full project lifecycle, from requirements gathering and understanding the problem to deploying and maintaining the project
Skills that you bring Along:
- At least 8 years of extensive work experience with Python and related frameworks, particularly Flask.
- Extensive experience in designing and scheduling backend Python jobs
- Hands-on experience with file formats such as JSON, Parquet, and CSV coming from the data science side.
- Extensive experience with databases such as Postgres and MongoDB.
- Extensive experience with cloud infrastructure (AWS-based), e.g. AWS API Gateway, Lambda functions, etc.
- Experience with caches such as Redis and/or in-memory caches
- Good experience with microservices/macroservices or event-driven architectures
- Good experience with design patterns
- Experience writing advanced SQL queries; good knowledge of PL/SQL
- Good understanding of software design principles and domain-driven design
- Good experience with continuous delivery and containerization (Docker)
- Good experience in designing and maintaining RESTful API endpoints
- Ideally maintaining infrastructure-as-code using Terraform
- Ideally experience in parallel data processing and building end-to-end Data Pipelines using tools such as Airflow/Prefect and Spark/Dask
- Excellent communication skills and the ability to explain complex topics in a simple manner
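The file-format work mentioned above often amounts to converting between data-science outputs and flat files; a stdlib-only sketch converting a JSON array of records to CSV (the field names are illustrative, and Parquet would need a third-party library such as pyarrow):

```python
import csv
import io
import json

def json_records_to_csv(json_text):
    """Flatten a JSON array of flat records into CSV text."""
    records = json.loads(json_text)
    # Union of keys across records, sorted for a stable column order.
    fieldnames = sorted({k for r in records for k in r})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)  # missing keys become ""
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

out = json_records_to_csv('[{"a": 1, "b": 2}, {"a": 3, "b": 4}]')
```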









