

Data Axle
https://data-axle.com/About
Data Axle is a product company that offers various data and technology solutions, including software-as-a-service (SaaS) and data-as-a-service (DaaS). These solutions help businesses manage and leverage data for marketing, sales, and business intelligence.
It is a data-driven marketing solutions provider that helps clients with clean data, lead generation, strategy development, campaign design, and day-to-day execution, solving the problem of inaccurate and incomplete data so that businesses can make informed decisions and drive growth. Data Axle operates across a range of industries, including healthcare, finance, retail, and technology.
About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global center of excellence in Pune. This center delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases.
Data Axle India is recognized as a Great Place to Work!
This prestigious designation is a testament to our collective efforts in fostering an exceptional workplace culture and creating an environment where every team member can thrive.
Jobs at Data Axle
Job Summary:
We are looking for a hands-on Java Application Engineer with 4–5 years of solid development experience to take ownership of a business-critical application currently in business continuity mode. While the application is not in active feature expansion, it is functionally essential and requires regular code enhancements, performance tuning, and reliable production support.
The role involves contributing to ongoing development work, fixing issues, implementing functional improvements, and supporting smooth operations. Looking ahead, after a couple of years this role will be central to modernizing and re-architecting the application to align with cloud-native, scalable architecture patterns.
Key Responsibilities:
- Take ownership of an existing Java-based application, ensuring it runs smoothly in production.
- Implement functional enhancements, refactoring, and performance improvements as needed.
- Troubleshoot and resolve production issues with a focus on long-term stability and root cause elimination.
- Collaborate with DevOps, QA, and cross-functional teams to deploy safe, incremental changes.
- Participate in planning for the future modernization of the application, including architectural improvements and cloud readiness.
- Provide occasional support during US business hours to address critical issues or coordinate with global stakeholders when needed.
Required Skills:
- 5–8 years of hands-on Java development experience using Core Java, Spring, JSP/Servlets, and SQL
- Strong understanding of backend application design, debugging, and production support
- Working knowledge of WildFly and/or Apache Tomcat application servers is an asset
- Familiarity with CI/CD pipelines, GitHub, and modern build tools
- Clear understanding of REST APIs, SOAP APIs, logging frameworks, and error-handling patterns (see the sketch after this list)
- Strong problem-solving skills and a proactive ownership mindset
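
By way of illustration, here is a minimal sketch of the kind of error-handling and logging pattern around a REST call that this role works with. The endpoint URL, retry count, and backoff are hypothetical, and the sketch uses Python for brevity even though the application itself is Java-based:

```python
import logging
import time

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders-client")

def fetch_order(order_id: str, retries: int = 3, backoff_s: float = 2.0) -> dict:
    """Call a hypothetical REST endpoint with bounded retries and structured logging."""
    url = f"https://api.example.com/orders/{order_id}"  # illustrative endpoint
    for attempt in range(1, retries + 1):
        try:
            resp = requests.get(url, timeout=5)
            resp.raise_for_status()  # surface 4xx/5xx responses as exceptions
            return resp.json()
        except requests.RequestException as exc:
            log.warning("attempt %d/%d failed for %s: %s", attempt, retries, url, exc)
            if attempt == retries:
                raise  # let the caller decide; the root cause stays visible in logs
            time.sleep(backoff_s * attempt)  # simple linear backoff between retries

if __name__ == "__main__":
    print(fetch_order("42"))
```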

We are looking for a Senior Software Engineer with 5+ years of experience in modern C++ development, paired with strong hands-on skills in AWS, Node.js, data processing, and containerized service development. The ideal candidate will be responsible for building scalable systems, maintaining complex data pipelines, and modernizing applications through cloud-native approaches and automation.
This is a high-impact role where engineering depth meets platform evolution, ideal for someone who thrives on system-level thinking, data-driven applications, and full-stack delivery.
Key Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Develop and deploy scalable backend services using Node.js and manage dependencies via NPM
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Utilize AWS services (Lambda, EC2, S3, CloudWatch, Step Functions) to deliver reliable cloud-native solutions (see the sketch after this list)
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
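
As a minimal illustration of an S3-backed ingestion-and-enrichment step, here is a hypothetical sketch using boto3; the bucket name, object key, and enrichment rule are assumptions, and Python is used for brevity even though the production services are built in C++ and Node.js:

```python
import csv
import io

import boto3  # AWS SDK for Python

def enrich_records(bucket: str, key: str) -> list[dict]:
    """Read a CSV object from S3 and apply a trivial enrichment step."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))
    for row in rows:
        # Hypothetical enrichment: derive a full name from two source fields.
        row["full_name"] = f'{row.get("first_name", "")} {row.get("last_name", "")}'.strip()
    return rows

if __name__ == "__main__":
    # Bucket and key are placeholders; credentials come from the usual AWS config.
    print(enrich_records("example-ingest-bucket", "incoming/contacts.csv")[:3])
```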
Required Skills:
- 5+ years of software development experience, with strong command over modern C++
- Solid experience with Node.js, JavaScript, and NPM for backend development
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
Preferred / Nice to Have:
- Exposure to data streaming frameworks (Kafka, Spark, etc.)
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Background in performance profiling, secure coding, or legacy modernization
- Ability to work in agile environments and lead small technical initiatives


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global center of excellence in Pune. This center delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team and continue our success in identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions serving a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference, and scoring (see the sketch after this list)
- Oversight of team project execution and delivery
- Establish peer review guidelines for high-quality coding that help grow junior team members’ skill sets and support cross-training and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
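
As a rough, hypothetical sketch of the train-and-score loop at the heart of such workflows, here is a minimal example on synthetic data, with scikit-learn standing in for whatever modeling stack the team actually uses:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for client features and a response flag.
rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # audience scores used for ranking/selection
print(f"holdout AUC: {roc_auc_score(y_test, scores):.3f}")
```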
Qualifications:
- Master’s degree in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles and Responsibilities:
- Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
- Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage (see the sketch after this list).
- Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
- Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
- Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
- Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
- Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
- Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
- Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
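
As a minimal illustration of the pipeline style described above, here is a hypothetical Apache Beam sketch; the data and transforms are placeholders, it runs locally on the DirectRunner, and the same pipeline shape can be submitted to Dataflow:

```python
import apache_beam as beam

def run() -> None:
    # DirectRunner locally; on GCP the equivalent pipeline runs on Dataflow.
    with beam.Pipeline() as p:
        (
            p
            | "Create" >> beam.Create(["alice,42", "bob,17", "carol,99"])  # placeholder input
            | "Parse" >> beam.Map(lambda line: line.split(","))
            | "Filter" >> beam.Filter(lambda rec: int(rec[1]) > 20)        # illustrative rule
            | "Format" >> beam.Map(lambda rec: f"{rec[0]}:{rec[1]}")
            | "Print" >> beam.Map(print)
        )

if __name__ == "__main__":
    run()
```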
Basic Qualifications
- 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
- Strong proficiency in SQL and experience with BigQuery for data warehousing.
- Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
- Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
- Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
- Knowledge of data governance, security best practices, and compliance requirements in cloud environments.
Preferred Qualifications:
- Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
- Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
- Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
- Proficiency in Python, Java, or Scala for data engineering and pipeline development.
- Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
- Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
- Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.



Roles & Responsibilities:
We are looking for a Lead Data Scientist to join the Data Science Client Services team and continue our success in identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions serving a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Lead Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference, and scoring
- Oversight of team project execution and delivery
- Establish peer review guidelines for high-quality coding that help grow junior team members’ skill sets and support cross-training and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Master’s degree in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 9 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global center of excellence in Pune. This center delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases.
Data Axle India is recognized as a Great Place to Work! This prestigious designation is a testament to our collective efforts in fostering an exceptional workplace culture and creating an environment where every team member can thrive.
General Summary:
As a Digital Data Management Architect, you will design, implement, and optimize advanced data management systems that support processing billions of digital transactions, ensuring high availability and accuracy. You will leverage your expertise in developing identity graphs, real-time data processing, and API integration to drive insights and enhance user experiences across digital platforms. Your role is crucial in building scalable and secure data architectures that support real-time analytics, identity resolution, and seamless data flows across multiple systems and applications.
Roles and Responsibilities:
- Data Architecture & System Design:
- Design and implement scalable data architectures capable of processing billions of digital transactions in real-time, ensuring low latency and high availability.
- Architect data models, workflows, and storage solutions to enable seamless real-time data processing, including stream processing and event-driven architectures.
- Identity Graph Development:
- Lead the development and maintenance of a comprehensive identity graph to unify disparate data sources, enabling accurate identity resolution across channels.
- Develop algorithms and data matching techniques to enhance identity linking, while maintaining data accuracy and privacy (see the sketch after this list).
- Real-Time Data Processing & Analytics:
- Implement real-time data ingestion, processing, and analytics pipelines to support immediate data availability and actionable insights.
- Work closely with engineering teams to integrate and optimize real-time data processing frameworks such as Apache Kafka, Apache Flink, or Spark Streaming.
- API Development & Integration:
- Design and develop real-time APIs that facilitate data access and integration across internal and external platforms, focusing on security, scalability, and performance.
- Collaborate with product and engineering teams to define API specifications, data contracts, and SLAs to meet business and user requirements.
- Data Governance & Security:
- Establish data governance practices to maintain data quality, privacy, and compliance with regulatory standards across all digital transactions and identity graph data.
- Ensure security protocols and access controls are embedded in all data workflows and API integrations to protect sensitive information.
- Collaboration & Stakeholder Engagement:
- Partner with data engineering, analytics, and product teams to align data architecture with business requirements and strategic goals.
- Provide technical guidance and mentorship to junior architects and data engineers, promoting best practices and continuous learning.
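
To make the identity-resolution idea concrete, here is a minimal, hypothetical sketch that clusters records sharing any identifier using union-find; a production identity graph adds match scoring, decay, and privacy controls well beyond this:

```python
from collections import defaultdict

class UnionFind:
    """Minimal union-find for clustering records that share an identifier."""
    def __init__(self, n: int):
        self.parent = list(range(n))
    def find(self, i: int) -> int:
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i: int, j: int) -> None:
        self.parent[self.find(i)] = self.find(j)

# Hypothetical records: any shared email or phone links two records.
records = [
    {"email": "a@x.com", "phone": "111"},
    {"email": "a@x.com", "phone": "222"},
    {"email": "b@y.com", "phone": "222"},
    {"email": "c@z.com", "phone": "333"},
]

uf = UnionFind(len(records))
seen: dict[str, int] = {}
for i, rec in enumerate(records):
    for key in (rec["email"], rec["phone"]):
        if key in seen:
            uf.union(i, seen[key])  # same identifier -> same identity cluster
        seen[key] = i

clusters = defaultdict(list)
for i in range(len(records)):
    clusters[uf.find(i)].append(i)
print(list(clusters.values()))  # [[0, 1, 2], [3]]
```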
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- 10+ years of experience in data architecture, digital data management, or a related field, with a proven track record in managing billion+ transactions.
- Deep experience with identity resolution techniques and building identity graphs.
- Strong proficiency in real-time data processing technologies (e.g., Kafka, Flink, Spark) and API development (RESTful and/or GraphQL).
- In-depth knowledge of database systems (SQL, NoSQL), data warehousing solutions, and cloud-based platforms (AWS, Azure, or GCP).
- Familiarity with data privacy regulations (e.g., GDPR, CCPA) and data governance best practices.
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.



General Summary:
The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of time), performing peer code reviews, handling pull requests and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.
Essential Job Functions:
- Design, develop, and maintain scalable applications using Python and Django.
- Build responsive and dynamic user interfaces using React and TypeScript.
- Implement and integrate GraphQL APIs for efficient data querying and real-time updates.
- Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
- Develop and manage RESTful APIs for seamless integration with third-party services.
- Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
- Use version control systems (primarily Git) and follow collaborative workflows.
- Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
- Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD).
- Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products.
Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.
Supportive Job Functions:
- Remain knowledgeable of new emerging technologies and their impact on internal systems.
- Available to work on call when needed.
- Perform other miscellaneous duties as assigned by management.
These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.
Skills:
- The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles. Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
- The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture (see the sketch after this list).
- Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
- Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
- The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
- In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
- Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
- Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached. Understanding OAuth 2.0, JWT, and SSO authentication mechanisms, and adhering to API security best practices following OWASP guidelines, is beneficial.
- Additionally, experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog, will be considered an advantage.
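
As a minimal, hypothetical sketch of two of the patterns named above, Strategy and Repository, with an in-memory store standing in for a real database layer:

```python
from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    """Strategy: interchangeable pricing rules behind one interface."""
    @abstractmethod
    def price(self, base: float) -> float: ...

class RegularPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return base

class SalePricing(PricingStrategy):
    def price(self, base: float) -> float:
        return base * 0.8  # illustrative 20% discount

class ProductRepository:
    """Repository: callers depend on this interface, not on the storage layer."""
    def __init__(self) -> None:
        self._rows = {1: ("widget", 10.0)}  # in-memory stand-in for a SQL table
    def get(self, product_id: int) -> tuple[str, float]:
        return self._rows[product_id]

def quote(repo: ProductRepository, strategy: PricingStrategy, product_id: int) -> float:
    name, base = repo.get(product_id)
    return strategy.price(base)

repo = ProductRepository()
print(quote(repo, RegularPricing(), 1))  # 10.0
print(quote(repo, SalePricing(), 1))     # 8.0
```

Swapping strategies or replacing the repository with a Django ORM-backed implementation requires no change to `quote`, which is the point of both patterns.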
Abilities:
- Ability to organize, prioritize, and handle multiple assignments on a daily basis.
- Strong and effective interpersonal and communication skills.
- Ability to interact professionally with a diverse group of clients and staff.
- Must be able to work flexible hours on-site and remote.
- Must be able to coordinate with other staff and provide technological leadership.
- Ability to work in a complex, dynamic team environment with minimal supervision.
- Must possess good organizational skills.
Education, Experience, and Certification:
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- 2+ years of relevant experience required.
- Experience using version control daily in a developer environment.
- Experience with Python, JavaScript, and React is required.
- Experience using rapid development frameworks like Django or Flask.
- Experience using front-end build tools.
Scope of Job:
- No direct reports.
- No supervisory responsibility.
- Consistent work week with minimal travel.
- Errors may be serious, costly, and difficult to discover.
- Contact with others inside and outside the company is regular and frequent.
- Some access to confidential data.
