50+ NumPy Jobs in India
Apply to 50+ NumPy Jobs on CutShort.io. Find your next job, effortlessly. Browse NumPy Jobs and apply today!

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.
The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems (a minimal sketch follows this list)
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
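As a loose illustration of the ETL responsibilities above, here is a minimal pandas sketch; the file, table, and column names are hypothetical.

```python
# Minimal ETL sketch (hypothetical file and column names): extract raw CSV data,
# clean and transform it with pandas, and load it into a SQLite table for reporting.
import sqlite3

import pandas as pd

def run_pipeline(src_csv: str = "raw_transactions.csv", db_path: str = "warehouse.db") -> None:
    # Extract
    df = pd.read_csv(src_csv)

    # Transform: drop duplicates, normalize column names, parse dates, fill gaps
    df = df.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["booked_at"] = pd.to_datetime(df["booked_at"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce").fillna(0.0)

    # Load
    with sqlite3.connect(db_path) as conn:
        df.to_sql("transactions_clean", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    run_pipeline()
```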

AccioJob is conducting a Walk-In Hiring Drive with Global Consulting and Services for the position of Python Automation Engineer.
To apply, register and select your slot here: https://go.acciojob.com/b7BZZZ
Required Skills: Excel, Python, Pandas, NumPy, SQL
Eligibility:
- Degree: B.Tech/BE, M.Tech/ME, BCA, MCA, B.Sc., M.Sc.
- Branch: All
- Graduation Year: 2023, 2024, 2025
Work Details:
- Work Location: Pune (Onsite)
- CTC: 3 LPA to 6 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview 1
- Technical Interview 2
- Tech + Managerial Round
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/b7BZZZ
Or, apply through our newly launched app: https://go.acciojob.com/4wvBDe

About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League alumni, ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. It includes Ph.D.s, academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Scientist at Moative, you’ll play a crucial role in extracting valuable insights from data to drive informed decision-making. You’ll work closely with cross-functional teams to build predictive models and develop solutions to complex business problems. You will also be involved in conducting experiments, building POCs and prototypes.
Responsibilities
- Support end-to-end development and deployment of ML/AI models - from data preparation, data analysis and feature engineering to model development, validation and deployment (a brief sketch follows the skills list below)
- Gather, prepare and analyze data, write code to develop and validate models, and continuously monitor and update them as needed.
- Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
- Document methodologies and results, present findings and communicate insights to non-technical audiences
Skills & Requirements
- Proficiency in Python and familiarity with basic Python libraries for data analysis and ML algorithms (such as NumPy, Pandas, scikit-learn, NLTK).
- Strong understanding and experience with data analysis, statistical and mathematical concepts and ML algorithms
- Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
- Broad understanding of data structures and data engineering.
- Strong communication skills
- Strong collaboration skills, a continuous-learning attitude and a problem-solving mindset
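As a rough illustration of the end-to-end flow described above (prepare data, train, validate), here is a minimal scikit-learn sketch on synthetic data; the dataset and model choice are illustrative assumptions, not Moative's actual stack.

```python
# Sketch of an end-to-end model workflow on synthetic data: feature scaling,
# training, and validation in a single scikit-learn pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```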
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about the rituals we commit to, since rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don't need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hire someone who has less to do. We don't like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied at top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to be present in the city. We intend to move to a hybrid model in a few months' time.


Lead Data Scientist role
Work Location - Remote
Experience - 7+ years relevant
Notice Period - Immediate
Job Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with expertise in Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI) and Deep Learning (DL).
Mandatory Skills:
• 5+ years of work experience in writing code in Python
• Experience in using various Python libraries like Pandas, NumPy
• Experience in writing good-quality code in Python and code-refactoring techniques (e.g., IDEs – PyCharm, Visual Studio Code; libraries – Pylint, pycodestyle, pydocstyle, Black)
• Strong experience with AI-assisted coding.
• AI-assisted coding in existing IDEs like VS Code.
• Experimented with multiple AI-assisted tools and researched them.
• Deep understanding of data structures, algorithms, and excellent problem-solving skills
• Experience in Python, Exploratory Data Analysis (EDA), Feature Engineering, Data Visualisation
• Machine Learning libraries like Scikit-learn, XGBoost
• Experience in CV, NLP or Time Series.
• Experience in building models for ML tasks (Regression, Classification)
• Should have experience with LLMs, LLM fine-tuning, chatbots, RAG-pipeline chatbots, LLM solutions, multi-modal LLM solutions, GPT, prompts, prompt engineering, tokens, context windows, attention mechanisms, and embeddings (a toy retrieval sketch appears below)
• Experience of model training and serving on any of the cloud environments (AWS, GCP, Azure)
• Experience in distributed training of models on NVIDIA GPUs
• Familiarity with Dockerizing models and creating model endpoints (REST or gRPC)
• Strong working knowledge of source code control tools such as Git, Bitbucket
• Prior experience of designing, developing and maintaining a machine learning solution through its life cycle is highly advantageous
• Strong drive to learn and master new technologies and techniques
• Strong communication and collaboration skills
• Good attitude and self-motivated
Mandatory Skills: Strong Python coding, Machine Learning, Software Engineering, Deep Learning, Generative AI, LLMs, AI-assisted coding tools.
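To make the RAG and embedding vocabulary above concrete, here is a toy retrieval sketch in plain NumPy; the documents are made up and the embedding function is a random stand-in for a real embedding model.

```python
# Toy illustration of the retrieval step in a RAG pipeline: embed documents and a
# query, rank by cosine similarity, and assemble the top hits into a prompt.
# The "embeddings" here are random stand-ins; a real pipeline would use an
# embedding model from an LLM provider instead of embed().
import numpy as np

docs = [
    "Refund policy: refunds are issued within 14 days.",
    "Shipping: orders ship within 2 business days.",
    "Support hours: 9am to 6pm IST on weekdays.",
]

def embed(text: str) -> np.ndarray:
    # Stand-in embedding; deterministic per text for this demo only.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

context = "\n".join(retrieve("When will my order ship?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: When will my order ship?"
print(prompt)
```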


About Us
Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.
As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.
What We Build
- Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
- DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
- ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
- High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains, tuned for high-frequency trading (HFT) and real-time response.
- Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.
Evaluation Process
- HR Discussion – A brief conversation to understand your motivation and alignment with the role.
- Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
- Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
- Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
- Final Interview – A concluding round to explore your background, interests, and team fit in depth.
- Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.
Job Description : Blockchain Data & ML Engineer
As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.
What You’ll Work On
- Build and maintain ETL pipelines for ingesting and processing blockchain data.
- Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
- Evaluate model performance, tune hyperparameters, and document experimental results.
- Develop monitoring tools to track model accuracy, data drift, and system health (a minimal drift check is sketched after this list).
- Collaborate with infrastructure and execution teams to integrate ML components into production systems.
- Design and maintain databases and storage systems to efficiently manage large-scale datasets.
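As referenced in the monitoring bullet above, a minimal drift check might look like the following, assuming SciPy is available; the data here is synthetic.

```python
# Minimal data-drift check: compare a feature's recent distribution against a
# training-time reference with a two-sample Kolmogorov-Smirnov test and flag
# drift on a small p-value.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature at training time
live = rng.normal(loc=0.4, scale=1.1, size=1_000)       # same feature in production

stat, p_value = ks_2samp(reference, live)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
if p_value < 0.01:
    print("Drift detected: retraining or investigation may be needed.")
```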
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language.
- Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals.
- Curiosity about how blockchain systems and crypto markets work under the hood.
- Self-motivated, eager to experiment and learn in a dynamic environment.
Bonus Points For
- Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
- Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
- Participation in hackathons or open-source contributions.
What You’ll Gain
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.


Required Skills:
• Basic understanding of machine learning concepts and algorithms
• Proficiency in Python and relevant libraries (NumPy, Pandas, scikit-learn)
• Familiarity with data preprocessing techniques
• Knowledge of basic statistical concepts
• Understanding of model evaluation metrics
• Basic experience with at least one deep learning framework (TensorFlow, PyTorch)
• Strong analytical and problem-solving abilities
Application Process: Create your profile on our platform and submit your portfolio, GitHub profile, or sample projects.
Job Summary:
We are hiring a Data Scientist – Gen AI with hands-on experience in developing Agentic AI applications using frameworks like LangChain, LangGraph, Semantic Kernel, or Microsoft Copilot. The ideal candidate will be proficient in Python, LLMs, and prompt engineering techniques such as RAG and Chain-of-Thought prompting.
Key Responsibilities:
- Build and deploy Agentic AI applications using LLM frameworks.
- Apply advanced prompt engineering (Zero-Shot, Few-Shot, CoT); a small prompt-assembly sketch follows this list.
- Integrate Retrieval-Augmented Generation (RAG).
- Develop scalable solutions in Python using NumPy, Pandas, TensorFlow/PyTorch.
- Collaborate with teams to deliver business-aligned Gen AI solutions.
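To make the prompting techniques above concrete, here is a framework-agnostic sketch of few-shot chain-of-thought prompt assembly; the example content is invented, and in practice the prompt would go to an LLM client (LangChain, a provider SDK, etc.).

```python
# Framework-agnostic sketch of few-shot chain-of-thought (CoT) prompt assembly.
# The worked examples are hypothetical; the final string would be sent to an LLM.
FEW_SHOT_EXAMPLES = [
    {
        "question": "A pipeline processes 120 records/min. How many in 2.5 hours?",
        "reasoning": "2.5 hours is 150 minutes; 120 * 150 = 18,000 records.",
        "answer": "18,000",
    },
]

def build_cot_prompt(question: str) -> str:
    parts = ["Answer the question. Think step by step before the final answer.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nA: {ex['answer']}\n")
    parts.append(f"Q: {question}\nReasoning:")
    return "\n".join(parts)

print(build_cot_prompt("A model serves 40 requests/sec. How many per hour?"))
```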
Must-Have Skills:
- Experience with LangChain, LangGraph, or similar (priority given).
- Strong understanding of LLMs, RAG, and prompt engineering.
- Proficiency in Python and relevant ML libraries.
Nice-to-Have:
- Wrapper API development for LLMs.
- REST API integration within Agentic workflows.
Qualifications:
- Bachelor’s/Master’s in CS, Data Science, AI, or related.
- 4–7 years in AI/ML/Data Science, with 1–2 years in Gen AI/LLMs.

ROLES AND RESPONSIBILITIES
As a Full Stack Developer at GoQuest Media, you will play a key role in building and maintaining web applications that deliver seamless user experiences for our global clients. From brainstorming features with the team to executing back-end logic, you will be involved in every aspect of our application development process.
You will be working with modern technologies like NodeJS, ReactJS, NextJS, and Tailwind CSS to create performant, scalable applications. Your role will span both front-end and back-end development as you build efficient and dynamic solutions to meet the company’s and users’ needs.
What will you be accountable for?
● End-to-End Development: Design and develop highly scalable and interactive web applications from scratch, taking ownership of both front-end (ReactJS, NextJS, Tailwind CSS) and back-end (NodeJS) development processes.
● Feature Implementation: Work closely with designers and product managers to translate ideas into highly interactive and responsive interfaces.
● Maintenance and Debugging: Ensure applications are optimized for performance, scalability, and reliability, and perform regular maintenance, debugging, and testing of existing apps to ensure they remain in top shape.
● Collaboration: Collaborate with cross-functional teams, including designers, product managers, and stakeholders, to deliver seamless and robust applications.
● Innovation: Stay updated with the latest trends and technologies to suggest and implement improvements in the development process.
Tech Stack
● Front-end: ReactJS, NextJS, Tailwind CSS
● Back-end: NodeJS, ExpressJS
● Database: MongoDB (preferred), MySQL
● Version Control: Git
● Tools: Webpack, Docker (optional but a plus)
Preferred Location
This role is based out of our Andheri Office, Mumbai.
Growth Opportunities for You
● Lead exciting web application projects end-to-end and own key product initiatives.
● Develop cutting-edge apps used by leading media clients around the globe.
● Gain experience working in a high-growth company in the media and tech industry.
● Potential to grow into a team lead role.
Who Should Apply?
● Individuals with a passion for coding and web technologies.
● Minimum 3-5 years of experience in full-stack development using NodeJS, ReactJS, NextJS, and Tailwind CSS.
● Strong understanding of both front-end and back-end development and ability to write efficient, reusable, and scalable code.
● Familiarity with databases like MongoDB and MySQL.
● Experience with CI/CD pipelines and cloud infrastructure (AWS, Google Cloud) is a plus.
● Team players with excellent communication skills and the ability to work in a fast-paced environment.
Who Should Not Apply?
● If you're not comfortable with both front-end and back-end development.
● If you don’t enjoy problem-solving or tackling complex development challenges.
● If working in a dynamic, evolving environment doesn’t appeal to you.

Job Description: AI/ML Specialist
We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF); a minimal sketch follows this list.
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
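As a minimal sketch of the DRF bullet above, the view below serves predictions from a pre-trained scikit-learn model; the model path is hypothetical, and URL routing and Django settings are omitted for brevity.

```python
# Hedged sketch of a DRF endpoint serving predictions from a pre-trained
# scikit-learn model. Model path and feature shape are illustrative assumptions.
import joblib
import numpy as np
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

MODEL = joblib.load("artifacts/model.joblib")  # hypothetical artifact path

class PredictView(APIView):
    def post(self, request):
        features = request.data.get("features")
        if not isinstance(features, list):
            return Response({"error": "features must be a list"},
                            status=status.HTTP_400_BAD_REQUEST)
        X = np.asarray(features, dtype=float).reshape(1, -1)
        return Response({"prediction": MODEL.predict(X).tolist()})
```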
Required Skills and Qualifications
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 3+ years of professional experience as an AI/ML Specialist.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Job Description:
• Experience in Python (backend only), data structures, OOP, algorithms, Django, NumPy, etc.
• Notice/Joining of not more than 30 days.
• Only premium institutes – Tier 1 and Tier 2.
• Hybrid Mode of working.
• Good understanding of writing unit tests using PyTest.
• Good understanding of parsing XML and handling files using Python (a small sketch follows this list).
• Good understanding of databases/SQL, procedures and query tuning.
• Service Design Concepts, OO and Functional Development concepts.
• Agile Development Methodologies.
• Strong oral and written communication skills.
• Excellent interpersonal skills and a professional approach.
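A small sketch of the PyTest and XML points above, using only the standard library plus pytest; the order schema is invented.

```python
# Parse order records with the standard-library ElementTree and unit-test the
# parser with pytest (run with: pytest this_file.py). Schema is hypothetical.
import xml.etree.ElementTree as ET

def parse_orders(xml_text: str) -> list[dict]:
    root = ET.fromstring(xml_text)
    return [
        {"id": o.attrib["id"], "amount": float(o.findtext("amount", default="0"))}
        for o in root.findall("order")
    ]

def test_parse_orders():
    xml_text = "<orders><order id='1'><amount>99.5</amount></order></orders>"
    assert parse_orders(xml_text) == [{"id": "1", "amount": 99.5}]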

Skills desired:
● Proven experience in training, evaluating and deploying machine learning models
● Solid understanding of data science and machine learning concepts
● Experience with some machine learning / data engineering tech in Python (such as numpy, pytorch, pandas/polars, airflow, etc.)
● Experience developing data products using large language models, prompt engineering, and model evaluation.
● Experience with web services and programming (such as Python, docker, databases etc.)
● Understanding of some of the following: FastAPI, PostgreSQL, Celery, Docker, AWS, Modal, git, continuous integration.
Job Title: Optimization Scientist – Route & Inventory Optimization (OR & RL)
Location: Hyderabad (On-site)
Experience Required: 1+ year
Company Overview:
We are a leading AI-driven supply chain solutions company focused on transforming retail, FMCG, and logistics through cutting-edge technologies in machine learning, operations research, and reinforcement learning. Our mission is to build intelligent systems that enhance decision-making and automate processes across forecasting, inventory, and transportation.
Internship Overview:
We are seeking a passionate and motivated AI/ML Intern to support the development of intelligent optimization systems for route planning and inventory allocation. You will work alongside experienced scientists and engineers, gaining hands-on experience in applying machine learning, reinforcement learning, and operations research to real-world logistics challenges.
Key Responsibilities:
🔹 Assist in Route Optimization Projects:
- Support modeling and solving simplified versions of Vehicle Routing Problems (VRP) under guidance.
- Work with Python libraries like Pyomo or OR-Tools to prototype optimization solutions (see the sketch after this list).
- Explore reinforcement learning methods (e.g., DQN, PPO) for dynamic routing decisions under uncertainty.
🔹 Support Inventory Optimization Efforts:
- Learn to model multi-echelon inventory systems using basic OR and simulation techniques.
- Analyze historical data to understand stock levels, service times, and demand variability.
- Help design experiments to evaluate replenishment strategies and stocking policies.
🔹 Contribute to AI-Driven Decision Systems:
- Assist in integrating ML forecasting models with optimization pipelines.
- Participate in the development or testing of simulation environments for training RL agents.
- Collaborate with the team to evaluate model performance using historical or synthetic datasets.
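As referenced above, a minimal OR-Tools routing prototype follows the standard pywrapcp pattern; the four-stop distance matrix and single vehicle are toy assumptions.

```python
# Minimal OR-Tools routing sketch: one vehicle over a toy 4-stop distance
# matrix, solved with the PATH_CHEAPEST_ARC first-solution strategy.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

DIST = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]

manager = pywrapcp.RoutingIndexManager(len(DIST), 1, 0)  # nodes, vehicles, depot
routing = pywrapcp.RoutingModel(manager)

def distance_cb(from_index, to_index):
    return DIST[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

routing.SetArcCostEvaluatorOfAllVehicles(routing.RegisterTransitCallback(distance_cb))

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC

solution = routing.SolveWithParameters(params)
index, route = routing.Start(0), []
while not routing.IsEnd(index):
    route.append(manager.IndexToNode(index))
    index = solution.Value(routing.NextVar(index))
print("Route:", route + [manager.IndexToNode(index)])
```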
Required Qualifications:
- Currently pursuing or recently completed a degree in Computer Science, Data Science, Operations Research, Industrial Engineering, or related field.
- Good understanding of Python and key libraries (NumPy, Pandas, Matplotlib, Scikit-learn).
- Familiarity with basic optimization concepts (LP/MILP) and libraries like OR-Tools or Gurobi (student license).
- Basic knowledge of reinforcement learning frameworks (OpenAI Gym, Stable-Baselines3) is a plus.
- Strong problem-solving skills and willingness to learn advanced AI/OR techniques.
What You’ll Gain:
- Hands-on exposure to real-world AI and optimization use cases in logistics and supply chain.
- Mentorship from experienced scientists in OR, ML, and RL.
- Experience working in a fast-paced, applied research environment.
- Opportunity to convert to a full-time role based on performance and business needs.
About WINIT:
WINIT is a pioneer in mobile Sales Force Automation (mSFA) with over 25 years of experience. We serve more than 600 global enterprises, helping them enhance efficiency, streamline logistics, and leverage AI/ML to optimize sales operations. With a commitment to innovation and global support, WINIT continues to lead digital transformation in sales.

Job Title : Python Django Developer
Experience : 3+ Years
Location : Gurgaon Sector - 48
Working Days : 6 Days WFO (Monday to Saturday)
Job Summary :
We are looking for a skilled Python Django Developer with strong foundational knowledge in backend development, data structures, and operating system concepts.
The ideal candidate should have experience in Django and PostgreSQL, along with excellent logical thinking and multithreading knowledge.
Main Technical Skills : Python, Django (or Flask), PostgreSQL/MySQL, SQL & NoSQL ORM, Microservice Architecture, Third-party API integrations (e.g., payment gateways, SMS/email APIs), REST API development, JSON/XML, strong knowledge of data structures, multithreading, and OS concepts.
Key Responsibilities :
- Write efficient, reusable, testable, and scalable code using the Django framework
- Develop backend components, server-side logic, and statistical models
- Design and implement high-availability, low-latency applications with robust data protection and security
- Contribute to the development of highly responsive web applications
- Collaborate with cross-functional teams on system design and integration
Mandatory Skills :
- Strong programming skills in Python and Django (or similar frameworks like Flask).
- Proficiency with PostgreSQL / MySQL and experience in writing complex queries.
- Strong understanding of SQL and NoSQL ORM.
- Solid grasp of data structures, multithreading, and operating system concepts.
- Experience with RESTful API development and implementation of API security.
- Knowledge of JSON/XML and their use in data exchange.
Good-to-Have Skills :
- Experience with Redis, MQTT, and message queues like RabbitMQ or Kafka.
- Understanding of microservice architecture and third-party API integrations (e.g., payment gateways, SMS/email APIs).
- Familiarity with MongoDB and other NoSQL databases.
- Exposure to data science libraries such as Pandas, NumPy, Scikit-learn.
- Knowledge in building and integrating statistical learning models.



Are you passionate about the power of data and excited to leverage cutting-edge AI/ML to drive business impact? At Poshmark, we tackle complex challenges in personalization, trust & safety, marketing optimization, product experience, and more.
Why Poshmark?
As a leader in Social Commerce, Poshmark offers an unparalleled opportunity to work with extensive multi-platform social and commerce data. With over 130 million users generating billions of daily events and petabytes of rapidly growing data, you’ll be at the forefront of data science innovation. If building impactful, data-driven AI solutions for millions excites you, this is your place.
What You’ll Do
- Drive end-to-end data science initiatives, from ideation to deployment, delivering measurable business impact through projects such as feed personalization, product recommendation systems, and attribute extraction using computer vision.
- Collaborate with cross-functional teams, including ML engineers, product managers, and business stakeholders, to design and deploy high-impact models.
- Develop scalable solutions for key areas like product, marketing, operations, and community functions.
- Own the entire ML Development lifecycle: data exploration, model development, deployment, and performance optimization.
- Apply best practices for managing and maintaining machine learning models in production environments.
- Explore and experiment with emerging AI trends, technologies, and methodologies to keep Poshmark at the cutting edge.
Your Experience & Skills
- Ideal Experience: 6-9 years of building scalable data science solutions in a big data environment. Experience with personalization algorithms, recommendation systems, or user behavior modeling is a big plus.
- Machine Learning Knowledge: Hands-on experience with key ML algorithms, including CNNs, Transformers, and Vision Transformers. Familiarity with Large Language Models (LLMs) and techniques like RAG or PEFT is a bonus.
- Technical Expertise: Proficiency in Python, SQL, and Spark (Scala or PySpark), with hands-on experience in deep learning frameworks like PyTorch or TensorFlow. Familiarity with ML engineering tools like Flask, Docker, and MLOps practices.
- Mathematical Foundations: Solid grasp of linear algebra, statistics, probability, calculus, and A/B testing concepts (a worked A/B example follows this list).
- Collaboration & Communication: Strong problem-solving skills and ability to communicate complex technical ideas to diverse audiences, including executives and engineers.
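As referenced in the Mathematical Foundations bullet, here is a worked two-proportion z-test for an A/B experiment; the counts are made up.

```python
# Worked two-proportion z-test for an A/B experiment (made-up counts):
# did variant B's conversion rate improve significantly over A's?
import numpy as np
from scipy.stats import norm

conv_a, n_a = 480, 10_000   # control conversions / users
conv_b, n_b = 545, 10_000   # treatment conversions / users

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided
print(f"lift={p_b - p_a:.4f}, z={z:.2f}, p={p_value:.4f}")
```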


🚀 Job Title : Python AI/ML Engineer
💼 Experience : 3+ Years
📍 Location : Gurgaon (Work from Office, 5 Days/Week)
📅 Notice Period : Immediate
Summary :
We are looking for a Python AI/ML Engineer with strong experience in developing and deploying machine learning models on Microsoft Azure.
🔧 Responsibilities :
- Build and deploy ML models using Azure ML.
- Develop scalable Python applications with cloud-first design.
- Create data pipelines using Azure Data Factory, Blob Storage & Databricks.
- Optimize performance, fix bugs, and ensure system reliability.
- Collaborate with cross-functional teams to deliver intelligent features.
✅ Requirements :
- 3+ Years of software development experience.
- Strong Python skills; experience with scikit-learn, pandas, NumPy.
- Solid knowledge of SQL and relational databases.
- Hands-on with Azure ML, Data Factory, Blob Storage.
- Familiarity with Git, REST APIs, Docker.

Job Title: AI & ML Developer
Experience: 1+ Years
Location: Hyderabad
Company: VoltusWave Technologies India Private Limited
Job Summary:
We are looking for a passionate and skilled AI & Machine Learning Developer with over 1 year of experience to join our growing team. You will be responsible for developing, implementing, and maintaining ML models and AI-driven applications that solve real-world business problems.
Key Responsibilities:
- Design, build, and deploy machine learning models and AI solutions.
- Work with large datasets to extract meaningful insights and develop algorithms.
- Preprocess, clean, and transform raw data for training and evaluation.
- Collaborate with data scientists, software developers, and product teams to integrate models into applications.
- Monitor and maintain the performance of deployed models.
- Stay updated with the latest developments in AI, ML, and data science.
Required Skills:
- Strong understanding of machine learning algorithms and principles.
- Experience with Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, Keras, etc.
- Familiarity with data processing tools like Pandas, NumPy, etc.
- Basic knowledge of deep learning and neural networks.
- Experience with data visualization tools (e.g., Matplotlib, Seaborn, Plotly).
- Knowledge of model evaluation and optimization techniques.
- Familiarity with version control (Git), Jupyter Notebooks, and cloud environments (AWS, GCP, or Azure) is a plus.
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
Nice to Have:
- Exposure to NLP, Computer Vision, or Time Series Analysis.
- Experience with ML Ops or deployment pipelines.
- Understanding of REST APIs and integration of ML models with web apps.
Why Join Us:
- Work on real-time AI & ML projects.
- Opportunity to learn and grow in a fast-paced, innovative environment.
- Friendly and collaborative team culture.
- Career development support and training.

We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience in Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus.
Responsibilities:
- Analyze and process large datasets using Python and Pandas.
- Develop and optimize machine learning models for predictive analytics.
- Create data visualizations using Matplotlib and Seaborn to support decision-making.
- Perform data cleaning, feature engineering, and statistical analysis.
- Work with structured and unstructured data to extract meaningful insights.
- Implement and fine-tune NER models for specific use cases (if required).
- Collaborate with cross-functional teams to drive data-driven solutions.
Required Skills & Qualifications:
- Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.).
- Experience in data analysis, statistical modeling, and machine learning.
- Hands-on expertise in data visualization using Matplotlib and Seaborn.
- Understanding of SQL and database querying.
- Familiarity with NLP techniques and NER models is a plus.
- Strong problem-solving and analytical skills.



Dear Professionals!
We are hiring a GenAI/ML Developer!
Key Skills & Qualifications
- Strong proficiency in Python, with a focus on GenAI best practices and frameworks.
- Expertise in machine learning algorithms, data modeling, and model evaluation.
- Experience with NLP techniques, computer vision, or generative AI.
- Deep knowledge of LLMs, prompt engineering, and GenAI technologies.
- Proficiency in data analysis tools like Pandas and NumPy.
- Hands-on experience with vector databases such as Weaviate or Pinecone.
- Familiarity with cloud platforms (AWS, Azure, GCP) for AI deployment.
- Strong problem-solving skills and critical-thinking abilities.
- Experience with AI model fairness, bias detection, and adversarial testing.
- Excellent communication skills to translate business needs into technical solutions.
Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, AI, or a related field.
- Experience with MLOps practices for model deployment and maintenance.
- Strong understanding of data pipelines, APIs, and cloud infrastructure.
- Advanced degree in Computer Science, Machine Learning, or a related field (preferred).

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, In office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java : Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.



Job Title : Sr. Data Scientist
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2 PM to 11 PM
Availability : Immediate
Job Description :
We are seeking a Senior Data Scientist to develop and implement machine learning models, predictive analytics, and data-driven solutions.
The role involves data analysis, dashboard development (Looker Studio), NLP, Generative AI (LLMs, Prompt Engineering), and statistical modeling.
Strong expertise in Python (Pandas, NumPy), Cloud Data Science (AWS SageMaker, Azure OpenAI), Agile (Jira, Confluence), and stakeholder collaboration is essential.
Mandatory skills : Machine Learning, Cloud Data Science (AWS SageMaker, Azure OpenAI), Python (Pandas, NumPy), Data Visualization (Looker Studio), NLP & Generative AI (LLMs, Prompt Engineering), Statistical Modeling, Agile (Jira, Confluence), and strong stakeholder communication.


- Programming Language: Python (Strong knowledge)
- Concurrency & Parallelism: Multithreading, Multiprocessing, AsyncIO, ThreadPoolExecutor, Future, concurrent.futures
- Memory Management: Reference Counting, Global Interpreter Lock (GIL)
- Distributed Computing: Dask, Apache Spark (Preferred)
- Data Processing: NumPy
- Inter-Service Communication: GRPC, REST API
- Containerization & Orchestration: Docker, Kubernetes
- Software Development Practices: Code Optimization, Debugging, Performance Tuning
- Communication & Problem-Solving: Technical Documentation, Team Collaboration, Asking for Clarity When Needed
Skills And Expertise
- Python
- Multithreading
- Multiprocessing
- Dask, Apache Spark
- NumPy
- REST API
- Docker
- Kubernetes
- Code Optimization
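A short sketch of the concurrency trade-off this profile emphasizes: threads share one GIL, so CPU-bound work gains from processes, while I/O-bound work suits threads. Timings will vary by machine.

```python
# Contrast ThreadPoolExecutor (serialized by the GIL for CPU-bound work) with
# ProcessPoolExecutor (true parallelism across processes).
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time

def cpu_bound(n: int) -> int:
    return sum(i * i for i in range(n))

def timed(executor_cls) -> float:
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        list(ex.map(cpu_bound, [2_000_000] * 4))
    return time.perf_counter() - start

if __name__ == "__main__":  # guard required for process pools on some platforms
    print("threads:  ", timed(ThreadPoolExecutor))   # limited by the GIL
    print("processes:", timed(ProcessPoolExecutor))  # parallel across cores
```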

We are seeking a talented UiPath Developer with experience in Python, SQL, Pandas, and NumPy to join our dynamic team. The ideal candidate will have hands-on experience developing RPA workflows using UiPath, along with the ability to automate processes through scripting, data manipulation, and database queries.
This role offers the opportunity to collaborate with cross-functional teams to streamline operations and build innovative automation solutions.
Key Responsibilities:
- Design, develop, and implement RPA workflows using UiPath.
- Build and maintain Python scripts to enhance automation capabilities.
- Utilize Pandas and NumPy for data extraction, manipulation, and transformation within automation processes.
- Write optimized SQL queries to interact with databases and support automation workflows (a small sketch follows this list).
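As referenced above, here is a sketch of the Python/SQL side of such an automation, using an in-memory SQLite database as a stand-in; table and file names are hypothetical.

```python
# Query a stand-in database, reshape with pandas/NumPy, and hand the result
# back to the RPA workflow as a CSV.
import sqlite3

import numpy as np
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (id INTEGER, region TEXT, amount REAL);
    INSERT INTO invoices VALUES (1,'North',120.0),(2,'South',80.5),(3,'North',42.0);
""")

df = pd.read_sql("SELECT region, amount FROM invoices", conn)
summary = df.groupby("region")["amount"].agg(total="sum", mean="mean").reset_index()
summary["total"] = np.round(summary["total"], 2)
summary.to_csv("invoice_summary.csv", index=False)  # consumed by the UiPath flow
print(summary)
```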
Skills and Qualifications:
- 2 to 5 years of experience in UiPath development.
- Strong proficiency in Python and working knowledge of Pandas and NumPy.
- Good experience with SQL for database interactions.
- Ability to design scalable and maintainable RPA solutions using UiPath.

Job Title: Generative AI Engineer (Specialist in Deep Learning)
Location: Gandhinagar, Ahmedabad, Gujarat
Company: Rayvat Outsourcing
Salary: Up to ₹2,50,000 per annum
Job Type: Full-Time
Experience: 0 to 1 Year
Job Overview:
We are seeking a talented and enthusiastic Generative AI Engineer to join our team. As an Intermediate-level engineer, you will be responsible for developing and deploying state-of-the-art generative AI models to solve complex problems and create innovative solutions. You will collaborate with cross-functional teams, working on a variety of projects that range from natural language processing (NLP) to image generation and multimodal AI systems. The ideal candidate has hands-on experience with machine learning models, deep learning techniques, and a passion for artificial intelligence.
Key Responsibilities:
· Develop, fine-tune, and deploy generative AI models using frameworks such as GPT, BERT, DALL·E, Stable Diffusion, etc.
· Research and implement cutting-edge machine learning algorithms in NLP, computer vision, and multimodal systems.
· Collaborate with data scientists, ML engineers, and product teams to integrate AI solutions into products and platforms.
· Create APIs and pipelines to deploy models in production environments, ensuring scalability and performance.
· Analyze large datasets to identify key features, patterns, and use cases for model training.
· Debug and improve existing models by evaluating performance metrics and applying optimization techniques.
· Stay up-to-date with the latest advancements in AI, deep learning, and generative models to continually enhance the solutions.
· Document technical workflows, including model architecture, training processes, and performance reports.
· Ensure ethical use of AI, adhering to guidelines around AI fairness, transparency, and privacy.
Qualifications:
· Bachelor’s/Master’s degree in Computer Science, Machine Learning, Data Science, or a related field.
· 2-4 years of hands-on experience in machine learning and AI development, particularly in generative AI.
· Proficiency with deep learning frameworks such as TensorFlow, PyTorch, or similar.
· Experience with NLP models (e.g., GPT, BERT) or image-generation models (e.g., GANs, diffusion models).
· Strong knowledge of Python and libraries like NumPy, Pandas, scikit-learn, etc.
· Experience with cloud platforms (e.g., AWS, GCP, Azure) for AI model deployment and scaling.
· Familiarity with APIs, RESTful services, and microservice architectures.
· Strong problem-solving skills and the ability to troubleshoot and optimize AI models.
· Good understanding of data preprocessing, feature engineering, and handling large datasets.
· Excellent written and verbal communication skills, with the ability to explain complex concepts clearly.
Preferred Skills:
· Experience with multimodal AI systems (combining text, image, and/or audio data).
· Familiarity with ML Ops and CI/CD pipelines for deploying machine learning models.
· Experience in A/B testing and performance monitoring of AI models in production.
· Knowledge of ethical AI principles and AI governance.
What We Offer:
· Competitive salary and benefits package.
· Opportunities for professional development and growth in the rapidly evolving AI field.
· Collaborative and dynamic work environment, with access to cutting-edge AI technologies.
· Work on impactful projects with real-world applications.

Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.


Job Description: AI/ML Engineer
Location: Bangalore (On-site)
Experience: 2+ years of relevant experience
About the Role:
We are seeking a skilled and passionate AI/ML Engineer to join our team in Bangalore. The ideal candidate will have over two years of experience in developing, deploying, and maintaining AI and machine learning models. As an AI/ML Engineer, you will work closely with our data science team to build innovative solutions and deploy them in a production environment.
Key Responsibilities:
- Develop, implement, and optimize machine learning models.
- Perform data manipulation, exploration, and analysis to derive actionable insights.
- Use advanced computer vision techniques, including YOLO and other state-of-the-art methods, for image processing and analysis.
- Collaborate with software developers and data scientists to integrate AI/ML solutions into the company's applications and products.
- Design, test, and deploy scalable machine learning solutions using TensorFlow, OpenCV, and other related technologies.
- Ensure the efficient storage and retrieval of data using SQL and data manipulation libraries such as pandas and NumPy.
- Contribute to the development of backend services using Flask or Django for deploying AI models.
- Manage code using Git and containerize applications using Docker when necessary.
- Stay updated with the latest advancements in AI/ML and integrate them into existing projects.
Required Skills:
- Proficiency in Python and its associated libraries (NumPy, pandas).
- Hands-on experience with TensorFlow for building and training machine learning models.
- Strong knowledge of linear algebra and data augmentation techniques (a small augmentation sketch follows this list).
- Experience with computer vision libraries like OpenCV and frameworks like YOLO.
- Proficiency in SQL for database management and data extraction.
- Experience with Flask for backend development.
- Familiarity with version control using Git.
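As referenced in the augmentation bullet, here is a NumPy-only sketch of two simple augmentations on a synthetic image; a real pipeline would load images with OpenCV (cv2.imread) and add rotations, crops, and similar transforms.

```python
# Minimal NumPy-only augmentation sketch on a synthetic image: horizontal flip
# plus a brightness shift, keeping pixel values in the valid uint8 range.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # fake HxWxC image

flipped = image[:, ::-1, :]  # horizontal flip

def adjust_brightness(img: np.ndarray, delta: int) -> np.ndarray:
    # Work in a wider dtype, then clip back to valid pixel range.
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

brighter = adjust_brightness(image, 40)
print(flipped.shape, brighter.mean() - image.mean())
```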
Optional Skills:
- Experience with PyTorch, Scikit-learn, and Docker.
- Familiarity with Django for web development.
- Knowledge of GPU programming using CuPy and CUDA.
- Understanding of parallel processing techniques.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Demonstrated experience in AI/ML, with a portfolio of past projects.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork skills.
Why Join Us?
- Opportunity to work on cutting-edge AI/ML projects.
- Collaborative and dynamic work environment.
- Competitive salary and benefits.
- Professional growth and development opportunities.
If you're excited about using AI/ML to solve real-world problems and have a strong technical background, we'd love to hear from you!
Apply now to join our growing team and make a significant impact!

Who are we looking for?
We are looking for a Senior Data Scientist who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong, proven experience in working on data-driven solutions. If you're enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team and unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ...
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards.
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research in collaboration with internal and external teams, and facilitating reviews of ML systems to surface innovative ideas for prototyping new models.
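As a flavour of the time-series work described in the summary above, a minimal pandas sketch; the CSV path and column names are invented for illustration:

```python
# Hypothetical sketch: resampling raw sensor readings and flagging outliers
# against a rolling baseline. "sensor_readings.csv" and "temperature" are
# placeholders, not real data sources.
import pandas as pd

readings = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
readings = readings.set_index("timestamp").sort_index()

hourly = readings["temperature"].resample("1h").mean()
rolling = hourly.rolling(window=24, min_periods=1).agg(["mean", "std"])
anomalies = hourly[(hourly - rolling["mean"]).abs() > 3 * rolling["std"]]
print(anomalies.head())
```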
Qualification and experience
- B.Tech/Masters/Ph.D. in computer science, electrical engineering, mathematics, data science, or a related field.
- 5+ years of professional experience in machine learning and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, ML libraries such as PyTorch, sklearn, and pandas, as well as SQL and Git, is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- Culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups, with the opportunity to join an industry-driven mentorship program, as we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations, you made it!
If this position sparked your interest, apply now to initiate the screening process.


Job Description:-
Designation : Python Developer
Location : Indore | WFO
Skills : Python, Django, Flask, NumPy, pandas, RESTful APIs, AWS.
Python Developer Responsibilities:-
1. Coordinating with development teams to determine application requirements.
2. Writing scalable code using Python programming language.
3. Testing and debugging applications.
4. Developing back-end components.
5. Integrating user-facing elements using server-side logic.
6. Assessing and prioritizing client feature requests.
7. Integrating data storage solutions.
8. Coordinating with front-end developers.
9. Reprogramming existing databases to improve functionality.
10. Developing digital tools to monitor online traffic.
Python Developer Requirements:-
1. Bachelor's degree in computer science, computer engineering, or related field.
2. At least 3 years of experience as a Python developer.
3. Expert knowledge of Python and related frameworks, including Django and Flask.
4. A deep understanding of multi-process architecture and the threading limitations of Python.
5. Familiarity with server-side templating languages, including Jinja2 and Mako (see the sketch after this list).
6. Ability to integrate multiple data sources into a single system.
7. Familiarity with testing tools.
8. Ability to collaborate on projects and work independently when required.
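For illustration, a minimal Jinja2 templating sketch of the kind referenced in requirement 5; the template string and values are a toy example:

```python
# Toy illustration of server-side templating with Jinja2.
from jinja2 import Template

template = Template("Hello {{ user }}! You have {{ count }} new messages.")
print(template.render(user="Asha", count=3))
```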




CTC Budget: 35-55LPA
Location: Hyderabad (Remote after 3 months WFO)
Company Overview:
An 8-year-old IT services and consulting company based in Hyderabad that helps clients maximize product value while delivering rapid, incremental innovation. The company has extensive SaaS M&A experience, including 20+ closed transactions on both the buy and sell sides. They have over 100 employees and are looking to grow the team.
- 6 plus years of experience as a Python developer.
- Experience in web development using Python and Django Framework.
- Experience in data analysis and data science using pandas, NumPy, and scikit-learn (GTH: good to have)
- Experience in developing User Interface using HTML, JavaScript, CSS.
- Experience in server-side templating languages including Jinja 2 and Mako
- Knowledge of Kafka and RabbitMQ (GTH)
- Experience with Docker, Git, and AWS
- Ability to integrate multiple data sources into a single system.
- Ability to collaborate on projects and work independently when required.
- Databases (MySQL, PostgreSQL, SQL)
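A small, hedged illustration of the pandas/NumPy/scikit-learn expectation above, fitting a simple classifier on scikit-learn's bundled Iris dataset:

```python
# Toy sketch: train/test split and a logistic regression on a sample dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```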
Selection Process: 2-3 Interview rounds (Tech, VP, Client)


From building entire infrastructures or platforms to solving complex IT challenges, Cambridge Technology helps businesses accelerate their digital transformation and become AI-first businesses. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them figure out the perfect approach, solutions, and ecosystem for their business. Our experts help customers leverage the right AI, big data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.
No Of Positions: 1
Skills required:
- The ideal candidate will have a bachelor’s degree in data science, statistics, or a related discipline with 4-6 years of experience, or a master’s degree with 4-6 years of experience. A strong candidate will also possess many of the following characteristics:
- Strong problem-solving skills with an emphasis on achieving proof-of-concept
- Knowledge of statistical techniques and concepts (regression, statistical tests, etc.)
- Knowledge of machine learning and deep learning fundamentals
- Experience with Python implementations to build ML and deep learning algorithms (e.g., pandas, NumPy, scikit-learn, statsmodels, Keras, PyTorch, etc.)
- Experience writing and debugging code in an IDE
- Experience using managed web services (e.g., AWS, GCP, etc.)
- Strong analytical and communication skills
- Curiosity, flexibility, creativity, and a strong tolerance for ambiguity
- Ability to learn new tools from documentation and internet resources.
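To make the Keras expectation above concrete, a tiny illustrative model trained on synthetic data; shapes, hyperparameters, and the task itself are arbitrary placeholders:

```python
# Toy Keras sketch: a small binary classifier on synthetic data.
import numpy as np
from tensorflow import keras

X = np.random.rand(500, 10).astype("float32")
y = (X.sum(axis=1) > 5).astype("float32")  # synthetic labels

model = keras.Sequential([
    keras.layers.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```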
Roles and responsibilities :
- You will work on a small, core team alongside other engineers and business leaders throughout Cambridge with the following responsibilities:
- Collaborate with client-facing teams to design and build operational AI solutions for client engagements.
- Identify relevant data sources for data wrangling and EDA
- Identify model architectures to use for client business needs.
- Build full-stack data science solutions up to MVP that can be deployed into existing client business processes or scaled up based on clear documentation.
- Present findings to teammates and key stakeholders in a clear and repeatable manner.
Experience :
2 - 14 Yrs


· 4+ years of experience as a Python Developer.
· Good understanding of object-oriented concepts and SOLID principles.
· Good programming and analytical skills.
· Should have hands-on experience with AWS cloud services such as S3 and Lambda functions. (Must Have)
· Should have experience working with large datasets. (Must Have)
· Proficient in using NumPy and pandas. (Must Have)
· Should have hands-on experience with MySQL. (Must Have)
· Should have experience in debugging Python applications (Must have)
· Knowledge of working on Flask.
· Knowledge of object-relational mapping (ORM).
· Able to integrate multiple data sources and databases into one system
· Proficient understanding of code versioning tools such as Git, SVN
· Strong at problem-solving and logical abilities
· Sound knowledge of Front-end technologies like HTML5, CSS3, and JavaScript
· Strong commitment and desire to learn and grow.
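An illustrative sketch of the S3-plus-pandas work listed above, using boto3; the bucket and key names are hypothetical:

```python
# Hypothetical sketch: reading a CSV object from S3 into pandas via boto3.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-bucket", Key="data/sales.csv")  # placeholders
df = pd.read_csv(io.BytesIO(obj["Body"].read()))
print(df.describe())
```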


Data Scientist
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and Matplotlib.
- Experience with TensorFlow, PyTorch, and/or modelling in R.
- Ability to understand a business problem and translate and structure it into a data science problem.
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
- Experienced in writing complex SQL SELECT queries (window functions and CTEs), with advanced SQL experience (see the sketch after this list)
- Should be able to work as an individual contributor for the initial few months; a team will be aligned based on project movement
- Strong in querying logic and data interpretation
- Solid communication and articulation skills
- Able to handle stakeholders independently, with minimal intervention from the reporting manager
- Develop strategies to solve problems in logical yet creative ways
- Create custom reports and presentations accompanied by strong data visualization and storytelling
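As flagged in the first bullet, a sketch of a CTE combined with a window function, run here against an in-memory SQLite database so the snippet is self-contained (requires SQLite 3.25+ for window functions; the table and data are invented):

```python
# Toy sketch: CTE + window function (running total per region).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('North', '2023-01', 100), ('North', '2023-02', 120),
        ('South', '2023-01', 90),  ('South', '2023-02', 80);
""")
query = """
WITH monthly AS (
    SELECT region, month, revenue FROM sales
)
SELECT region, month, revenue,
       SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total
FROM monthly
ORDER BY region, month;
"""
for row in conn.execute(query):
    print(row)
```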



Job Description – Data Science
Basic Qualification:
- ME/MS from a premier institute with a background in Mechanical/Industrial/Chemical/Materials engineering.
- Strong Analytical skills and application of Statistical techniques to problem solving
- Expertise in algorithms, data structures and performance optimization techniques
- Proven track record of demonstrating end-to-end ownership, taking an idea from incubation to market
- Minimum 2+ years of experience in data analysis, statistical analysis, data mining, and optimization algorithms.
Responsibilities
The Data Engineer/Analyst will
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Interact clearly with business teams, including product planning, sales, marketing, and finance, to define projects and objectives.
- Mine and analyze data from company databases to drive optimization and improvement of product and process development, marketing techniques and business strategies
- Coordinate with different R&D and Business teams to implement models and monitor outcomes.
- Mentor team members towards developing quick solutions for business impact.
- Skilled at all stages of the analysis process, including defining key business questions; recommending measures, data sources, methodology, and study design; dataset creation; analysis execution; and interpretation, presentation, and publication of results.
- 4+ years' experience in an MNC environment on projects involving ML, DL, and/or data science
- Experience in machine learning, data mining, or machine intelligence (artificial intelligence)
- Knowledge of Microsoft Azure is desirable.
- Expertise in machine learning techniques such as classification, data/text mining, NLP, image processing, decision trees, random forests, neural networks, and deep learning algorithms
- Proficient in Python and its various libraries, such as NumPy, Matplotlib, and pandas (illustrated in the sketch below)
- Superior verbal and written communication skills, ability to convey rigorous mathematical concepts and considerations to Business Teams.
- Experience in infra development / building platforms is highly desired.
- A drive to learn and master new technologies and techniques.
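As referenced above, a small illustration of the NumPy/Matplotlib/pandas stack: plotting a rolling mean of synthetic data (all values are generated, not real):

```python
# Toy sketch: rolling-mean plot on a synthetic random-walk series.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

series = pd.Series(np.random.randn(200).cumsum())
series.plot(label="raw", alpha=0.5)
series.rolling(20).mean().plot(label="20-step rolling mean")
plt.legend()
plt.savefig("rolling_mean.png")  # write to file so the sketch runs headless
```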


Outplay is building the future of sales engagement: a solution that helps sales teams personalize at scale while consistently staying on message and on task, through true multi-channel outreach including email, phone, SMS, chat, and social media. Outplay is the only tool your sales team will ever need to crush their goals. Funded by Sequoia and headquartered in the US: Sequoia not only led a $2 million seed round in Outplay early this year but also followed with a $7.3 million Series A recently. The team is spread remotely all over the globe.
Perks of being an Outplayer :
• Fully remote job - You can be on the mountains or at the beach, and still work with us. Outplay is a 100% remote company.
• Flexible work hours - We believe mental health is way more important than a 9-5 job.
• Health Insurance - We are a family, and we take care of each other - we provide medical insurance coverage to all employees and their family members. We also provide an additional benefit of doctor consultation along with the insurance plan.
• Annual company retreat - we work hard, and we party harder.
• Best tools - we buy you the best tools of the trade
• Celebrations - No, we never forget your birthday or anniversary (be it work or wedding) and we never leave an opportunity to celebrate milestones and wins.
• Safe space to innovate and experiment
• Steady career growth and job security
About the Role:
We are looking for a Senior Data Scientist to help research, develop and advance the charter of AI at Outplay and push the threshold of conversational intelligence.
Job description :
• Lead AI initiatives that dissect data to create new feature prototypes and minimum viable products
• Conduct product research in natural language processing, conversation intelligence, and virtual assistant technologies
• Use independent judgment to enhance product by using existing data and building AI/ML models
• Collaborate with teams, provide technical guidance to colleagues and come up with new ideas for rapid prototyping. Convert prototypes into scalable and efficient products.
• Work closely with multiple teams on projects using textual and voice data to build conversational intelligence
• Prototype and demonstrate AI augmented capabilities in the product for customers
• Conduct experiments to assess the precision and recall of language processing modules and study the effect of such experiments on different application areas of sales
• Assist business development teams in the expansion and enhancement of a feature pipeline to support short and long-range growth plans
• Identify new business opportunities and prioritize pursuits of AI for different areas of conversational intelligence
• Build reusable and scalable solutions for use across a varied customer base
• Participate in long range strategic planning activities designed to meet the company’s objectives and revenue goals
Required Skills :
• Bachelor's or Master's in a quantitative field such as Computer Science, Statistics, Mathematics, Operations Research or a related field, with a focus on applied Machine Learning, AI, NLP and data-driven statistical analysis & modelling.
• 4+ years of experience applying AI/ML/NLP/Deep Learning/ data-driven statistical analysis & modelling solutions to multiple domains. Experience in the Sales and Marketing domain is a plus.
• Experience in building Natural Language Processing (NLP), Conversational Intelligence, and Virtual Assistants based features.
• Excellent grasp of programming languages like Python. Experience in GoLang would be a plus.
• Proficient in analysis using Python packages like pandas, Plotly, NumPy, SciPy, etc.
• Strong and proven programming skills in machine learning and deep learning, with experience in frameworks such as TensorFlow/Keras, PyTorch, Transformers, Spark, etc.
• Excellent communication skills to explain complex solutions to stakeholders across multiple disciplines.
• Experience in SQL, RDBMS, Data Management and Cloud Computing (AWS and/or Azure) is a plus.
• Extensive experience of training and deploying different Machine Learning models
• Experience in monitoring deployed models to proactively capture data drifts, low performing models, etc.
• Exposure to Deep Learning, Neural Networks or related fields
• Passion for solving AI/ML problems for both textual and voice data.
• Fast learner, with great written and verbal communication skills, and able to work independently as well as in a team environment
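A minimal sketch of the precision/recall evaluation mentioned in the description above, using scikit-learn on toy labels:

```python
# Toy illustration of precision and recall for a binary classifier.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
```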
- Experience and expertise in Python Development and its different libraries like Pyspark, pandas, NumPy
- Expertise in ADF, Databricks.
- Creating and maintaining data interfaces across a number of different protocols (file, API, etc.).
- Creating and maintaining internal business process solutions to keep our corporate system data in sync and reduce manual processes where appropriate.
- Creating and maintaining monitoring and alerting workflows to improve system transparency.
- Facilitate the development of our Azure cloud infrastructure relative to Data and Application systems.
- Design and lead development of our data infrastructure including data warehouses, data marts, and operational data stores.
- Experience in using Azure services such as ADLS Gen 2, Azure Functions, Azure messaging services, Azure SQL Server, Azure Key Vault, Azure Cognitive Services, etc.
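A minimal PySpark sketch of the kind of pipeline work described above; the input path, column names, and local Spark session are assumptions for illustration:

```python
# Hypothetical sketch: read a CSV, aggregate, and write Parquet with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()
df = spark.read.csv("input/events.csv", header=True, inferSchema=True)  # placeholder path
daily = df.groupBy("event_date").agg(F.count("*").alias("events"))
daily.write.mode("overwrite").parquet("output/daily_events")
spark.stop()
```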
What we look for:
We are looking for an associate who will crunch data from various sources and surface the key points from it. This associate will also help us improve existing pipelines and build new ones on request, visualize the data where required, and find flaws in our existing algorithms.
Responsibilities:
- Work with multiple stakeholders to gather the requirements of data or analysis and take action on them.
- Write new data pipelines and maintain the existing pipelines.
- Gather data from various databases and derive the required metrics from it.
Required Skills:
- Experience with Python and libraries like pandas and NumPy.
- Experience in SQL and understanding of NoSQL databases.
- Hands-on experience in Data engineering.
- Must have good analytical skills and knowledge of statistics.
- Understanding of Data Science concepts.
- Bachelor's degree in Computer Science or a related field.
- Problem-solving skills and ability to work under pressure.
Nice to have:
- Experience in MongoDB or any other NoSQL database (see the sketch after this list).
- Experience in Elasticsearch.
- Knowledge of Tableau, Power BI or any other visualization tool.
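As referenced in the nice-to-haves, a hedged sketch of pulling metrics from MongoDB with pymongo; the connection string, database, and collection names are hypothetical:

```python
# Hypothetical sketch: aggregating counts by status from a MongoDB collection.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
orders = client["shop"]["orders"]                  # placeholder db/collection
pipeline = [
    {"$group": {"_id": "$status", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in orders.aggregate(pipeline):
    print(row)
```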



Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable and advanced, and to achieving extraordinary precision in analytics?
What is your role?
As a Computer Vision & Machine Learning Engineer at Datasee.AI, you'll be core to the development of our robotic harvesting system's visual intelligence. You'll bring deep computer vision, machine learning, and software expertise while also thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you'll directly build our success, growth, and culture, and you'll hold a significant role with room to grow as Datasee.AI grows.
What you’ll do
- You will be working with the core R&D team which drives the computer vision and image processing development.
- Build deep learning models for our data and for object detection on large-scale images.
- Design and implement real-time algorithms for object detection, classification, tracking, and segmentation
- Coordinate and communicate within computer vision, software, and hardware teams to design and execute commercial engineering solutions.
- Automate the workflow process between the fast-paced data delivery systems.
What we are looking for
- 1 to 3+ years of professional experience in computer vision and machine learning.
- Extensive use of Python
- Experience with Python libraries such as OpenCV, TensorFlow, and NumPy
- Familiarity with a deep learning library such as Keras or PyTorch
- Worked on different CNN architectures such as FCN, R-CNN, Fast R-CNN and YOLO
- Experienced in hyperparameter tuning, data augmentation, data wrangling, model optimization and model deployment
- B.E./M.E/M.Sc. Computer Science/Engineering or relevant degree
- Dockerization, AWS modules, and production-level modelling
- Basic knowledge of the fundamentals of GIS would be an added advantage
Preferred Requirements
- Experience with Qt, Desktop application development, Desktop Automation
- Knowledge of satellite image processing, geographic information systems, GDAL, QGIS, and ArcGIS
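For flavour, a small OpenCV/NumPy preprocessing sketch of the kind this role involves; the image path is a placeholder:

```python
# Hypothetical sketch: grayscale, blur, and edge detection with OpenCV.
import cv2
import numpy as np

image = cv2.imread("field.jpg")  # placeholder input image
if image is None:
    raise SystemExit("field.jpg not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)
print("edge pixels:", int(np.count_nonzero(edges)))
cv2.imwrite("edges.jpg", edges)
```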
About Datasee.AI:
Datasee.AI, Inc. is an AI-driven image analytics company offering asset management solutions for industries in the sectors of Renewable Energy, Infrastructure, Utilities & Agriculture. With core expertise in image processing, computer vision & machine learning, the company's solution provides value across the enterprise for all stakeholders through a data-driven approach.
With Sales & Operations based out of US, Europe & India, Datasee.AI is a team of 32 people located across different geographies and with varied domain expertise and interests.
A focused and happy bunch of people who take tasks head-on and build scalable platforms and products.

A Reputed Analytics Consulting Company in Data Science field



Job Title : Analyst / Sr. Analyst – Data Science Developer - Python
Exp : 2 to 5 yrs
Loc : B’lore / Hyd / Chennai
NP: Candidate should join us in 2 months (Max) / Immediate Joiners Pref.
About the role:
We are looking for an Analyst / Senior Analyst who works in the analytics domain with a strong python background.
Desired Skills, Competencies & Experience:
• 2-4 years of experience working in the analytics domain with a strong Python background.
• Visualization skills in Python with Plotly, Matplotlib, Seaborn, etc., and the ability to create customized plots using such tools (a plotting sketch follows this list).
• Ability to write effective, scalable, and modular code; able to understand, test, and debug existing Python project modules quickly and contribute to them.
• Familiarity with Git workflows.
Good to Have:
• Familiarity with cloud platforms like AWS, AzureML, Databricks, GCP, etc.
• Understanding of shell scripting and Python package development.
• Experience with Python data science packages like pandas, NumPy, and sklearn.
• ML model building and evaluation experience using sklearn.
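As flagged above, a brief illustration of customized plotting, here using Plotly Express with its bundled sample dataset; the output filename is arbitrary:

```python
# Toy sketch: a scatter plot written to HTML so it runs without a display.
import plotly.express as px

df = px.data.iris()  # bundled sample dataset
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species",
                 title="Iris sepal dimensions")
fig.write_html("iris_scatter.html")
```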

- Writing efficient, reusable, testable, and scalable code
- Understanding, analyzing, and implementing – Business needs, feature modification requests, conversion into software components
- Integration of user-oriented elements into different applications, data storage solutions
- Developing – backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, highly responsive web applications
- Designing and implementing – High availability and low latency applications, data protection and security features
- Performance tuning and automation of application
- Working with Python libraries like Pandas, NumPy, etc.
- Creating predictive models for AI and ML-based features
- Keeping abreast with the latest technology and trends
- Fine-tune and develop AI/ML-based algorithms based on results
Technical Skills-
Good proficiency in,
- Python frameworks like Django, etc.
- Web frameworks and RESTful APIs
- Core Python fundamentals and programming
- Code packaging, release, and deployment
- Database knowledge
- Loops, conditionals, and control statements
- Object-relational mapping
- Code versioning tools like Git, Bitbucket
Fundamental understanding of,
- Front-end technologies like JS, CSS3 and HTML5
- AI, ML, Deep Learning, Version Control, Neural networking
- Data visualization, statistics, data analytics
- Design principles that are executable for a scalable app
- Creating predictive models
- Libraries like TensorFlow, Scikit-learn, etc.
- Multi-process architecture
- Basic knowledge about Object Relational Mapper libraries
- Ability to integrate databases and various data sources into a unified system
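A minimal object-relational-mapping sketch to make the ORM point above concrete. The posting is Django-centric, but for a self-contained example this uses SQLAlchemy with an in-memory SQLite database; the model and table names are invented:

```python
# Toy ORM sketch with SQLAlchemy 2.0 (not Django's ORM): define a model,
# insert a row, and query it back.
from sqlalchemy import Integer, String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(50))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(User(name="Asha"))
    session.commit()
    print(session.scalars(select(User.name)).all())
```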


Key skills : Python, NumPy, pandas, SQL, ETL
Roles and Responsibilities:
- The work will involve the development of workflows triggered by events from other systems
- Design, develop, test, and deliver software solutions in the FX Derivatives group
- Analyse requirements for the solutions they deliver, to ensure that they provide the right solution
- Develop easy-to-use documentation for the frameworks and tools developed, for adoption by other teams
- Familiarity with event-driven programming in Python (a sketch follows this posting)
- Must have unit testing and debugging skills
- Good problem solving and analytical skills
- Python packages such as NumPy and scikit-learn
- Testing and debugging applications.
- Developing back-end components.
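As flagged above, a hedged sketch of event-driven Python: a tiny asyncio consumer reacting to events on a queue (the event payloads are invented for illustration):

```python
# Toy event-driven sketch: a producer feeding an asyncio queue consumer.
import asyncio

async def handle_events(queue: asyncio.Queue) -> None:
    while True:
        event = await queue.get()
        if event is None:  # sentinel to stop the consumer
            break
        print(f"processing event: {event}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    consumer = asyncio.create_task(handle_events(queue))
    for payload in ({"type": "trade", "id": 1}, {"type": "quote", "id": 2}):
        await queue.put(payload)
    await queue.put(None)
    await consumer

asyncio.run(main())
```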


Key Skills Required :
- Proficiency in Python 3.x based web and backend development
- Solid understanding of Python concepts
- Strong experience in building web applications using Django
- Experience building REST APIs using DRF or Flask
- Experience with some form of Machine Learning (ML)
- Experience in using libraries such as Numpy and Pandas
- Hands on experience with RDBMS such as Postgres or MySQL including querying
- Comfort with Git repositories, branching and deployment using Git
- Working experience with Docker
- Basic working knowledge of ReactJs
- Experience in deploying Django applications to AWS, DigitalOcean, or Heroku
Responsibilities :
- Understanding requirements and contributing to engineering solutions at a conceptual stage to provide the best possible solution to the task/challenge
- Building high-quality code using coding standards, based on the SRS/documentation
- Building component-based, maintainable, scalable, and reusable backend libraries/modules
- Building & documenting scalable APIs to the OpenAPI standard
- Unit testing development modules and APIs (see the sketch after this list)
- Conducting code reviews to ensure that the highest quality standards are maintained
- Securing backend applications and APIs using industry best practices
- Troubleshooting issues and fixing bugs raised by the QA team efficiently.
- Optimizing code
- Building and deploying the applications
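As referenced in the unit-testing responsibility above, a small pytest-style illustration against a trivial helper; the function and its behaviour are hypothetical:

```python
# Toy unit-test sketch (run with: pytest test_slugify.py).
def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
```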

Job Description
JD - Python Developer
Responsibilities
- Design and implement software features based on requirements
- Architect new features for products or tools
- Articulate and document designs as needed
- Prepare and present technical training
- Provide estimates and status for development tasks
- Work effectively in a highly collaborative and iterative development process
- Work effectively with the Product, QA, and DevOps team.
- Troubleshoot issues and correct defects when required
- Build unit and integration tests that assure correct behavior and increase the maintainability of the code base
- Apply dev-ops and automation as needed
- Commit to continuous learning and enhancement of skills and product knowledge
Required Qualifications
- Minimum of 5 years of relevant experience in development and design
- Proficiency in Python and extensive knowledge of its associated libraries
- Extensive experience with Python data science libraries: TensorFlow, NumPy, SciPy, pandas, etc.
- Strong skills in producing visuals with algorithm results
- Strong SQL and working knowledge of Microsoft SQL Server and other data storage technologies
- Strong web development skills
- Advanced knowledge of ORM and data access patterns
- Experienced working using Scrum and Agile methodologies
- Excellent debugging and troubleshooting skills
- Deep knowledge of DevOps practices and cloud services
- Strong collaboration and verbal and written communication skills
- Self-starter, detail-oriented, organized, and thorough
- Strong interpersonal skills and a team-oriented mindset
- Fast learner and creative capacity for developing innovative solutions to complex problems
Skills
Python, SQL, TensorFlow, NumPy, SciPy, pandas
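Touching the SciPy/NumPy portion of the stack above, an illustrative significance test on synthetic samples (all data generated, not real):

```python
# Toy sketch: two-sample t-test with SciPy on synthetic normal samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, size=200)
b = rng.normal(0.2, 1.0, size=200)
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```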


Develop state-of-the-art algorithms in the fields of computer vision, machine learning, and deep learning.
Provide software specifications and production code on time to meet project milestones.
Qualifications
BE or Master's with 3+ years of experience
Must have prior knowledge and experience in image processing and video processing
Should have knowledge of object detection and recognition
Must have experience in feature extraction, segmentation, and classification of images
Face detection, alignment, recognition, tracking & attribute recognition
Excellent understanding of, and project/job experience in, machine learning, particularly in areas of deep learning – CNN, RNN, TensorFlow, Keras, etc.
Real-world expertise in deep learning applied to computer vision problems
Strong foundation in mathematics
Strong development skills in Python
Must have worked with vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, and Keras
Quick learner of new technologies
Ability to work independently as well as part of a team
Knowledge of working closely with version control (Git)
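To make the CNN experience above concrete, a toy PyTorch sketch of a CNN forward pass; the architecture and shapes are arbitrary, purely for illustration:

```python
# Toy CNN sketch in PyTorch: one conv block plus a linear classifier.
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN()
dummy = torch.randn(1, 3, 32, 32)  # one fake RGB image
print(model(dummy).shape)  # torch.Size([1, 10])
```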


Positions : 2-3
CTC Offering : 40,000 to 55,000/month
Job Location: Remote for 6-12 months due to the pandemic, then Mumbai, Maharashtra
Required experience:
Minimum 1.5 to 2 years of experience in web & backend development using Python and Django, with experience in some form of machine learning (ML) algorithms
Overview
We are looking for Python developers with a strong understanding of object orientation and experience in web and backend development. Experience with analytical algorithms and mathematical calculations using libraries such as NumPy and pandas is a must, along with experience in some form of machine learning. We require candidates who have working experience using the Django framework and DRF.
Key Skills required:
1. Proficiency in Python 3.x based web and backend development
2. Solid understanding of Python concepts
3. Strong experience in building web applications using Django
4. Experience building REST APIs using DRF or Flask
5. Experience with some form of Machine Learning (ML)
6. Experience in using libraries such as Numpy and Pandas
7. Some form of experience with NLP and Deep Learning using any of Pytorch, Tensorflow, Keras, Scikit-learn or similar
8. Hands on experience with RDBMS such as Postgres or MySQL
9. Comfort with Git repositories, branching and deployment using Git
10. Working experience with Docker
11. Basic working knowledge of ReactJs
12. Experience in deploying Django applications to AWS,Digital Ocean or Heroku
KRAs includes :
1. Understanding the scope of work
2. Understanding and adopting the current internal development workflow and processes
3. Understanding client requirements as communicated by the project manager
4. Arriving on timelines for projects, either independently or as a part of a team
5. Executing projects either independently or as a part of a team
6. Developing products and projects using Python
7. Writing code to collect and mathematically analyse large volumes of data.
8. Creating backend modules in Python by building new modules or reusing existing ones, so as to deliver optimally and on time
9. Writing scalable, maintainable code
10. Building secured REST APIs
11. Setting up batch task processing environments using Celery
12. Unit testing prepared modules
13. Bug fixing issues as reported by the QA team
14. Optimization and performance tuning of code
Bonus but not mandatory
1. Nodejs
2. Redis
3. PHP
4. CI/CD
5. AWS
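A hedged sketch of the Celery batch-task setup mentioned in KRA 11; the Redis broker address and the task body are placeholders:

```python
# Toy Celery sketch: one batch task registered against a hypothetical
# local Redis broker.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # placeholder broker

@app.task
def analyse_batch(record_ids: list[int]) -> int:
    # Placeholder for the real batch computation.
    return len(record_ids)

# Usage (with a worker running): analyse_batch.delay([1, 2, 3])
```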
- Do analytics to extract insights from the organization's raw historical data.
- Generate usable training datasets for any/all MV projects with the help of annotators, if needed.
- Analyse user trends and identify their biggest bottlenecks in the Hammoq workflow.
- Test the short/long-term impact of productized MV models on those trends.
- Skills: NumPy, pandas, Apache Spark, PySpark, and ETL are mandatory.

Position description:
- Architect and design systems for predictive analysis, and write algorithms to deal with financial data
- Must have experience on web services and APIs (REST, JSON, and similar) and creation and consumption of RESTful APIs
- Proficiency in writing algorithms with Python/pandas/NumPy; Jupyter/PyCharm
- Experience with relational and NoSQL databases (e.g., MSSQL, MongoDB, Redshift, PostgreSQL, Redis)
- Implementing Machine Learning Models using Python/R for best performance
- Working with Time Series Data & analyzing large data sets.
- Implementing financial strategies in Python and generating reports to analyze the strategy results.
Primary Responsibilities:
- Writing algorithms to deal with financial data, implementing financial strategies in Python and SQL, and generating reports to analyze the strategy results.
Educational qualifications preferred: Bachelor's degree
Required Knowledge:
- Highly skilled in SQL, Python, pandas, NumPy, machine learning, predictive modelling, algorithm design, and OOP concepts
- 2-7 years of full-time working experience in a core SQL/Python role (non-support)
- Bachelor’s Degree in Engineering, equivalent or higher education.
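For illustration only, a pandas/NumPy sketch of the kind of strategy implementation and reporting described above: a toy moving-average crossover on synthetic prices (parameters and data are invented, not a real strategy):

```python
# Toy sketch: moving-average crossover signal and its cumulative return.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
prices = pd.Series(100 + rng.normal(0, 1, 500).cumsum())  # synthetic prices
fast, slow = prices.rolling(10).mean(), prices.rolling(50).mean()
signal = (fast > slow).astype(int)           # 1 = long, 0 = flat
returns = prices.pct_change() * signal.shift(1)
print(f"strategy return: {(1 + returns.fillna(0)).prod() - 1:.2%}")
```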




Job Description:
We are looking for an exceptional Data Scientist Lead / Manager who is passionate about data and motivated to build large-scale machine learning solutions that make our data products shine. This person will contribute to the analytics of data for insight discovery and to the development of machine learning pipelines that support modeling terabytes of daily data for various use cases.
Location: Pune (Initially remote due to COVID 19)
Looking for someone who can start immediately or within a month. Hands-on experience in Python programming (minimum 5 years) is a must.
About the Organisation :
- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom and India.
- You will gain work experience in a global environment. We speak over 20 different languages, from more than 16 different nationalities and over 42% of our staff are multilingual.
Qualifications:
• 8+ years relevant working experience
• Master / Bachelors in computer science or engineering
• Working knowledge of Python and SQL
• Experience in time series data, data manipulation, analytics, and visualization
• Experience working with large-scale data
• Proficiency in various ML algorithms for supervised and unsupervised learning
• Experience working in Agile/Lean model
• Experience with Java and Golang is a plus
• Experience with BI toolkit such as Tableau, Superset, Quicksight, etc is a plus
• Exposure to building large-scale ML models using one or more modern tools and libraries such as AWS SageMaker, Spark MLlib, Dask, TensorFlow, PyTorch, Keras, GCP ML Stack
• Exposure to modern Big Data tech such as Cassandra/Scylla, Kafka, Ceph, Hadoop, Spark
• Exposure to IAAS platforms such as AWS, GCP, Azure
Typical persona: Data Science Manager/Architect
Experience: 8+ years programming/engineering experience (with at least last 4 years in Data science in a Product development company)
Type: Hands-on candidate only
Must:
a. Hands-on Python: pandas, scikit-learn
b. Working knowledge of Kafka (a short sketch follows at the end of this listing)
c. Able to carry out own tasks and help the team in resolving problems - logical or technical (25% of job)
d. Good on analytical & debugging skills
e. Strong communication skills
Desired (in order of priorities)
a. Go (Strong advantage)
b. Airflow (Strong advantage)
c. Familiarity & working experience on more than one type of database: relational, object, columnar, graph and other unstructured databases
d. Data structures, Algorithms
e. Experience with multi-threading and thread synchronization concepts
f. AWS Sagemaker
g. Keras
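As flagged under the Kafka requirement above, a hedged sketch using the kafka-python client (one of several Python Kafka clients) against a hypothetical local broker:

```python
# Toy sketch: publishing a JSON event with kafka-python.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"user_id": 42, "action": "click"})  # placeholder topic
producer.flush()
```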