11+ Primary Research Jobs in Pune
Apply to 11+ Primary Research jobs in Pune on CutShort.io. Explore the latest Primary Research job opportunities across top companies like Google, Amazon & Adobe.


Job Summary:
We are seeking a skilled Python Developer with a strong foundation in Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying intelligent systems that leverage large datasets and cutting-edge ML algorithms to solve real-world problems.
Key Responsibilities:
- Design and implement machine learning models using Python and libraries like TensorFlow, PyTorch, or Scikit-learn.
- Perform data preprocessing, feature engineering, and exploratory data analysis.
- Develop APIs and integrate ML models into production systems using frameworks like Flask or FastAPI (see the sketch after this list).
- Collaborate with data scientists, DevOps engineers, and backend teams to deliver scalable AI solutions.
- Optimize model performance and ensure robustness in real-time environments.
- Maintain clear documentation of code, models, and processes.
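
By way of illustration, here is a minimal sketch of the FastAPI integration work described above: a scikit-learn model served behind a prediction endpoint. The model file, feature layout, and endpoint name are hypothetical, not part of this posting.

```python
# Minimal sketch: serving a scikit-learn model behind a FastAPI endpoint.
# Assumes a trained model was saved earlier with joblib.dump(model, "model.joblib");
# the artifact path and feature layout are illustrative placeholders.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact path

class PredictRequest(BaseModel):
    features: list[float]  # one flat feature vector per request

class PredictResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Reshape to the (n_samples, n_features) layout scikit-learn expects.
    X = np.asarray(req.features).reshape(1, -1)
    y = model.predict(X)
    return PredictResponse(prediction=float(y[0]))
```

Saved as, e.g., main.py, such a service could be run with `uvicorn main:app`.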
Required Skills:
- Proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Strong understanding of ML algorithms (classification, regression, clustering, deep learning).
- Experience with data pipeline tools (e.g., Airflow, Spark) and cloud platforms (AWS, Azure, or GCP).
- Familiarity with containerization (Docker, Kubernetes) and CI/CD practices.
- Solid grasp of RESTful API development and integration.
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, or related field.
- 2–5 years of experience in Python development with a focus on AI/ML.
- Exposure to MLOps practices and model monitoring tools.
Responsibilities:
• Designing Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured & semi-structured data processing
• Implementing Spark-based ETL frameworks (a minimal sketch follows this list)
• Implementing big data pipelines for data ingestion, storage, processing & consumption
• Modifying the Informatica-Teradata & Unix-based data pipelines
• Enhancing the Talend-Hive/Spark & Unix-based data pipelines
• Developing and deploying Scala/Python-based Spark jobs for ETL processing
• Strong grasp of SQL & DWH concepts
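
As an illustration of the ingestion-storage-processing-consumption pattern above, here is a minimal PySpark ETL sketch; all paths, table names, and column names are hypothetical placeholders, not from this posting.

```python
# Minimal PySpark ETL sketch: ingest raw CSV, transform, and write
# partitioned Parquet registered as a Hive table. All paths, table and
# column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")       # hypothetical job name
    .enableHiveSupport()         # lets saveAsTable write to the Hive metastore
    .getOrCreate()
)

# Ingestion: schema inference kept simple for the sketch; a production
# job would pin an explicit schema.
raw = spark.read.option("header", "true").csv("/data/raw/orders/")

# Processing: basic cleansing plus a derived partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Consumption: partitioned Parquet, exposed as a Hive table.
(
    clean.write
         .mode("overwrite")
         .partitionBy("order_date")
         .format("parquet")
         .saveAsTable("analytics.orders_clean")
)

spark.stop()
```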
Preferred Background:
• Function as an integrator between business needs and technology, helping to create solutions that meet clients' business needs
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understanding of the business's EDW system, and creating high-level design and low-level implementation documents
• Understanding of the business's Big Data Lake system, and creating high-level design and low-level implementation documents
• Designing big data pipelines for data ingestion, storage, processing & consumption
- Minimum 3 years of experience in software development
- 2+ years of Blockchain & Hyperledger experience is a must
- Good experience with microservices, Docker, Kubernetes, DevOps, and cloud technologies.
- Experience with Java / Golang / Python / Node.js / RoR / C# or any OO language.
- Nice to have: CHFA or CHFD certification
Responsibilities:
- Manage the entire sales cycle
- Find clients from existing resources
- Call on leads to arrange demos
- Follow up on leads until conversion
- Create and share proposals
- Provide professional after-sales support to improve customer relationships
- Present periodic reports on inside sales activities and be accountable for them
- Establish and maintain strong relationships with clients and new prospects
Requirements:
- Minimum 2-3 years of experience in B2B Edtech or education
- Must be an exceptional communicator
- Strong email-writing skills
- MBA preferred, otherwise minimum of a Bachelor’s Degree.
- Must be self-motivated, flexible, and collaborative, with an eagerness to learn

- 5-7 years of web application and web site development with Drupal (corporate web sites, ecommerce sites, portals, mobile sites).
- 4+ years managing web application projects.
- Thorough understanding of the Software Development Lifecycle (e.g. Requirements, Design, Development, Testing) and exposure to Agile or iterative SDLCs.
- Experience with one or more modern JavaScript frameworks (React, Angular, Vue); React preferred.
- Experience writing semantic, responsive HTML.
- Experience writing object-oriented PHP.
- Experience implementing web solutions in Drupal and PHP.
- Experience with Drupal’s theme layer.
- Experience with Drupal’s module system and experience writing or extending modules.
- Experience with Drupal’s Migrate framework.
- Experience with Drupal JavaScript behaviors.
- Experience with Acquia-provided tooling and development workflows for Drupal 8/9 preferred.
- Experience implementing accessibility standards (Section 508, WCAG).
- Experience with CSS preprocessors (Sass, Less).
- Experience with version control tools (Git).
Years of Exp: 3-6+ Years
Skills: Scala, Python, Hive, Airflow, Spark (see the Airflow sketch below this listing)
Languages: Java, Python, Shell Scripting
GCP: BigTable, DataProc, BigQuery, GCS, Pubsub
OR
AWS: Athena, Glue, EMR, S3, Redshift
MongoDB, MySQL, Kafka
Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time
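
For the Airflow skill listed above, a minimal DAG sketch chaining an ingest step and a Spark transform (assuming Airflow 2.4+; the DAG id, schedule, and callables are illustrative, not from the posting):

```python
# Minimal Airflow DAG sketch: two Python tasks wired in sequence.
# DAG id, schedule, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder: pull source data (e.g. from GCS/S3) into a staging area.
    print("ingest step")

def transform():
    # Placeholder: submit or run the Spark transformation.
    print("transform step")

with DAG(
    dag_id="daily_events_etl",       # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task    # run transform only after ingest succeeds
```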
Responsibilities
Understand business requirements and actively provide inputs from a data perspective.
Experience with SSIS development.
Experience in migrating SSIS packages to the Azure-SSIS Integration Runtime.
Experience in data warehouse / data mart development and migration.
Good knowledge of and experience with Azure Data Factory (see the sketch after this list).
Expert-level knowledge of SQL DB & data warehousing.
Should know at least one programming language (Python or PowerShell).
Should be able to analyse and understand complex data flows in SSIS.
Knowledge of Control-M.
Knowledge of Azure Data Lake is required.
Excellent interpersonal/communication skills (both oral and written), with the ability to communicate at various levels with clarity & precision.
Build simple to complex pipelines & dataflows.
Work with other Azure stack modules like Azure Data Lakes, SQL DW, etc.
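
As a small illustration of working with Azure Data Factory programmatically, here is a sketch that triggers an ADF pipeline run via the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory, pipeline, and parameter names are all hypothetical placeholders.

```python
# Minimal sketch: starting and polling an Azure Data Factory pipeline run
# with the azure-mgmt-datafactory SDK. All names below are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"      # placeholder
resource_group = "rg-data-platform"        # hypothetical
factory_name = "adf-enterprise"            # hypothetical
pipeline_name = "pl_migrated_ssis_load"    # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Kick off the pipeline, passing runtime parameters if it declares any.
run = client.pipelines.create_run(
    resource_group, factory_name, pipeline_name,
    parameters={"load_date": "2024-01-01"},  # hypothetical parameter
)
print(f"Started pipeline run: {run.run_id}")

# Poll the run's current status.
status = client.pipeline_runs.get(resource_group, factory_name, run.run_id)
print(f"Current status: {status.status}")
```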
Requirements
Bachelor’s degree in Computer Science, Computer Engineering, or relevant field.
A minimum of 5 years’ experience in a similar role.
Strong knowledge of database structures and data mining.
Excellent organizational and analytical abilities.
Outstanding problem solver.
Good written and verbal communication skills.