
50+ pandas Jobs in India

Apply to 50+ pandas Jobs on CutShort.io. Find your next job, effortlessly. Browse pandas Jobs and apply today!

ChicMic Studios
Posted by Akanksha Mittal
Mohali
3 - 8 yrs
₹8L - ₹22L / yr
Natural Language Processing (NLP)
Named-entity recognition
Data-flow analysis
Data Visualization
NumPy

We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience in Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus.


Analyze and process large datasets using Python and Pandas.

Develop and optimize machine learning models for predictive analytics.

Create data visualizations using Matplotlib and Seaborn to support decision-making.

Perform data cleaning, feature engineering, and statistical analysis.

Work with structured and unstructured data to extract meaningful insights.

Implement and fine-tune NER models for specific use cases (if required).

Collaborate with cross-functional teams to drive data-driven solutions.


Required Skills & Qualifications:

Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.).

Experience in data analysis, statistical modeling, and machine learning.

Hands-on expertise in data visualization using Matplotlib and Seaborn.

Understanding of SQL and database querying.

Familiarity with NLP techniques and NER models is a plus.

Strong problem-solving and analytical skills.
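The responsibilities above amount to routine Pandas work; a minimal, self-contained sketch of the cleaning, feature-engineering, and aggregation steps (the dataset and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical sales records; in practice these would come from a file or database.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south", "north"],
    "units": [10, 7, None, 5, 12],
    "price": [2.5, 3.0, 2.5, 3.5, 2.5],
})

# Data cleaning: fill missing unit counts with the column median.
df["units"] = df["units"].fillna(df["units"].median())

# Feature engineering: derive revenue per row.
df["revenue"] = df["units"] * df["price"]

# Statistical analysis: aggregate revenue by region.
summary = df.groupby("region")["revenue"].agg(["mean", "sum"])
print(summary)
```

The same pattern scales to large datasets; visualization would follow by handing `summary` to Matplotlib or Seaborn.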

Boston Technology Corporation
Posted by Arungouda S D
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹18L / yr
Python
Django
Flask
pytest
SQL

Job Description :


We are seeking a highly skilled and motivated Python Developer with 4 to 6 years of experience to join our dynamic development team. The ideal candidate will have expertise in Python programming and be proficient in building scalable, secure, and efficient applications. The role involves collaborating with cross-functional teams to design, develop, and maintain software solutions.


The core responsibilities for the job include the following :


1. Application Development:


- Write clean, efficient, and reusable Python code.


- Develop scalable backend solutions and RESTful APIs.


- Optimize applications for maximum speed and scalability.
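The RESTful-API responsibility above can be sketched without any framework using the standard library's WSGI interface (PEP 3333); the route and payload here are invented, and a production service would typically use Django, Flask, or FastAPI:

```python
import json

def app(environ, start_response):
    """Minimal WSGI application exposing a single read-only JSON endpoint."""
    if environ.get("PATH_INFO") == "/api/health" and environ.get("REQUEST_METHOD") == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

def call(path, method="GET"):
    """Exercise the WSGI callable directly, without running a server."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app({"PATH_INFO": path, "REQUEST_METHOD": method}, start_response))
    return captured["status"], json.loads(body)
```

Served for real, the same `app` callable would be handed to `wsgiref.simple_server.make_server`.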


2. Integration and Database Management:


- Integrate data storage solutions such as relational databases (e.g., MySQL, PostgreSQL) or NoSQL databases (e.g., MongoDB).


- Work with third-party APIs and libraries to enhance application functionality.


3. Collaboration and Problem-Solving:


- Collaborate with front-end developers, designers, and project managers.


- Debug, troubleshoot, and resolve application issues promptly.


4. Code Quality and Documentation:


- Adhere to coding standards and best practices.


- Write comprehensive technical documentation and unit tests.


5. Innovation and Optimization:


- Research and implement new technologies and frameworks to improve software performance.


- Identify bottlenecks and devise solutions to optimize performance.


6. Requirements:


- Strong programming skills in Python with 4-6 years of hands-on experience.


- Proficiency in at least one Python web framework (e.g., Django, Flask, FastAPI).


- Experience with RESTful API development and integration.


- Knowledge of database design and management using SQL (MySQL, PostgreSQL) and NoSQL (MongoDB).


- Familiarity with cloud platforms (e.g., AWS, Azure, or Google Cloud) and containerization tools like Docker.


- Experience with version control systems like Git.


- Strong understanding of software development lifecycle (SDLC) and Agile methodologies.


- Knowledge of front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.


- Experience with testing frameworks like Pytest or Unittest.


- Working knowledge of Java is a plus.


- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
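The Pytest requirement above usually means plain test functions with bare asserts; a hypothetical example (the `slugify` utility is invented for illustration):

```python
# Code under test: a small, hypothetical utility.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Pytest discovers test_* functions automatically; a plain assert is enough.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  Python   Developer ") == "python-developer"
```

Run with `pytest`; no test classes or boilerplate are required.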


7. Preferred Skills:


- Knowledge of data processing libraries such as Pandas or NumPy.


- Experience with machine learning frameworks like TensorFlow or PyTorch (optional but a plus).


- Familiarity with CI/CD pipelines and deployment practices.


- Experience in message brokers like RabbitMQ or Kafka.


8. Soft Skills:


- Excellent problem-solving skills and attention to detail.


- Strong communication and teamwork abilities.


- Ability to manage multiple tasks and meet deadlines in a fast-paced environment.


- Willingness to learn and adapt to new technologies.

Moative

Posted by Eman Khan
Chennai
1 - 5 yrs
₹15L - ₹30L / yr
NumPy
pandas
Scikit-Learn
Natural Language Toolkit (NLTK)
Machine Learning (ML)

About Moative

Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs. 


Role

We seek skilled and experienced data science/machine learning professionals with a strong background in at least one of mathematics, financial engineering, or electrical engineering to join our Energy & Utilities team. If you are interested in artificial intelligence, excited about solving real business problems in the energy and utilities industry, and keen to contribute to impactful projects, this role is for you!


Work you’ll do

As a data scientist in the energy and utilities industry, you will perform quantitative analysis and build mathematical models to forecast energy demand and supply and to design strategies for efficient load balancing. You will work on models for short-term and long-term pricing, improving operational efficiency, reducing costs, and ensuring reliable power supply. You'll work closely with cross-functional teams to deploy these models in solutions that provide insights into real-world business problems. You will also be involved in conducting experiments and building POCs and prototypes.


Responsibilities

  • Develop and implement quantitative models for load forecasting, energy production and distribution optimization.
  • Analyze historical data to identify and predict extreme events, and measure impact of extreme events. Enhance existing pricing and risk management frameworks.
  • Develop and implement quantitative models for energy pricing and risk management. Monitor market conditions and adjust models as needed to ensure accuracy and effectiveness.
  • Collaborate with engineering and operations teams to provide quantitative support for energy projects. Enhance existing energy management systems and develop new strategies for energy conservation.
  • Maintain and improve quantitative tools and software used in energy management.
  • Support the end-to-end ML/AI model lifecycle - from data preparation, data analysis, and feature engineering to model development, validation, and deployment
  • Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
  • Document methodologies and results, present findings and communicate insights to non-technical audiences
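Load forecasting of the kind described above is often prototyped as a regression on historical demand; a sketch using ordinary least squares on synthetic data (the daily-cycle model and noise level are assumptions for illustration, not a production method):

```python
import numpy as np

# Synthetic hourly demand with a daily cycle plus noise (illustrative only).
rng = np.random.default_rng(0)
hours = np.arange(240)
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Fit demand ~ a*sin + b*cos + c with ordinary least squares.
X = np.column_stack([
    np.sin(2 * np.pi * hours / 24),
    np.cos(2 * np.pi * hours / 24),
    np.ones(hours.size),
])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

# Forecast the next 24 hours from the fitted harmonics.
future = np.arange(240, 264)
Xf = np.column_stack([
    np.sin(2 * np.pi * future / 24),
    np.cos(2 * np.pi * future / 24),
    np.ones(future.size),
])
forecast = Xf @ coef
```

Real load models add weather, calendar, and price features, but the fit-then-extrapolate shape stays the same.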


Skills & Requirements

  • Strong background in mathematics, econometrics, electrical engineering, or a related field.
  • Experience in data analysis and quantitative modeling using programming languages such as Python or R.
  • Excellent analytical and problem-solving skills.
  • Strong understanding and experience with data analysis, statistical and mathematical concepts and ML algorithms
  • Proficiency in Python and familiarity with basic Python libraries for data analysis and ML algorithms (such as NumPy, Pandas, Scikit-learn, NLTK).
  • Strong communication skills
  • Strong collaboration skills, ability to work with engineering and operations teams.
  • A continuous learning attitude and a problem solving mind-set

Good to have -

  • Knowledge of energy markets, regulations, and utility operations.
  • Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
  • Broad understanding of data structures and data engineering.


Working at Moative

Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:


  • Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
  • Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
  • Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
  • Avoid work about work. Process creeps in on purpose unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say it out loud.
  • High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.


If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.


That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.


The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Jio Tesseract
Posted by TARUN MISHRA
Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Navi Mumbai, Kolkata, Rajasthan
5 - 24 yrs
₹9L - ₹70L / yr
C
C++
Visual C++
Embedded C++
Artificial Intelligence (AI)

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization, with the mission to democratize mixed reality for India and the world. We build products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI; notable products include JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise spaces.


Mon-Fri, in-office role with excellent perks and benefits!


Position Overview

We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.


Key Responsibilities:

1. System Architecture & Design

● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.

● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.

● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.


2. Perception & AI Integration

● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.

● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.

● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
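Sensor fusion of the kind described above is classically introduced with a one-dimensional Kalman filter; a NumPy sketch with invented noise parameters (a real pipeline on Jetson would fuse multi-sensor state through DeepStream/ROS2 components, not this toy):

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Fuse a stream of noisy scalar measurements into a smoothed estimate.

    q: process-noise variance, r: measurement-noise variance (both assumed).
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows over time
        k = p / (p + r)       # Kalman gain: how much to trust the measurement
        x += k * (z - x)      # update: blend prediction and measurement
        p *= (1 - k)          # uncertainty shrinks after the update
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_value = 5.0
noisy = true_value + rng.normal(0, 0.2, 200)
smoothed = kalman_1d(noisy)
```

The same predict/update structure generalizes to the multi-dimensional filters used in LIDAR/IMU/radar fusion.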


3. Embedded & Real-Time Systems

● Design high-performance embedded software stacks for real-time robotic control and autonomy.

● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.

● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.


4. Robotics Simulation & Digital Twins

● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.

● Leverage synthetic data generation (Omniverse Replicator) for training AI models.

● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.


5. Navigation & Motion Planning

● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.

● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.

● Implement reinforcement learning-based policies using Isaac Gym.


6. Performance Optimization & Scalability

● Ensure low-latency AI inference and real-time execution of robotics applications.

● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.

● Develop benchmarking and profiling tools to measure software performance on edge AI devices.


Required Qualifications:

● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.

● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.

● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.

● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.

● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.

● Strong background in robotic perception, planning, and real-time control.

● Experience with cloud-edge AI deployment and scalable architectures.


Preferred Qualifications

● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym

● Knowledge of robot kinematics, control systems, and reinforcement learning

● Expertise in distributed computing, containerization (Docker), and cloud robotics

● Familiarity with automotive, industrial automation, or warehouse robotics

● Experience designing architectures for autonomous systems or multi-robot systems.

● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics

● Experience with microservices or service-oriented architecture (SOA)

● Knowledge of machine learning and AI integration within robotic systems

● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)

Jio Tesseract
Posted by TARUN MISHRA
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai, Navi Mumbai
5 - 40 yrs
₹8.5L - ₹75L / yr
Microservices
Architecture
API
NOSQL Databases
MongoDB

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization, with the mission to democratize mixed reality for India and the world. We build products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI; notable products include JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise spaces.


Mon-Fri, in-office role with excellent perks and benefits!


Key Responsibilities:

1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.

2. Build and implement scalable and robust microservices and integrate API gateways.

3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).

4. Implement real-time data pipelines using Kafka.

5. Collaborate with front-end developers to ensure seamless integration of backend services.

6. Write clean, reusable, and efficient code following best practices, including design patterns.

7. Troubleshoot, debug, and enhance existing systems for improved performance.


Mandatory Skills:

1. Proficiency in at least one backend technology: Node.js, Python, or Java.


2. Strong experience in:

i. Microservices architecture,

ii. API gateways,

iii. NoSQL databases (e.g., MongoDB, DynamoDB),

iv. Kafka

v. Data structures (e.g., arrays, linked lists, trees).


3. Frameworks:

i. If Java: Spring Framework for backend development.

ii. If Python: FastAPI/Django frameworks for AI applications.

iii. If Node: Express.js for Node.js development.
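The data-structures item above (arrays, linked lists, trees) covers fundamentals; a minimal singly linked list in Python for illustration:

```python
class Node:
    """One element of a singly linked list."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedList:
    """Singly linked list with O(1) prepend and O(n) traversal/reversal."""
    def __init__(self):
        self.head = None

    def prepend(self, value):
        self.head = Node(value, self.head)

    def to_list(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

    def reverse(self):
        # Re-point each node's next link at its predecessor.
        prev, node = None, self.head
        while node:
            node.next, prev, node = prev, node, node.next
        self.head = prev
```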


Good to Have Skills:

1. Experience with Kubernetes for container orchestration.

2. Familiarity with in-memory databases like Redis or Memcached.

3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.

Tecblic Private LImited
Posted by Priya Khatri
Ahmedabad
4 - 5 yrs
₹11L - ₹15L / yr
Large Language Models (LLM)
Natural Language Processing (NLP)
Artificial Intelligence (AI)
Deep Learning
Machine Learning (ML)

Job Description: Machine Learning Engineer – LLM and Agentic AI

Location: Ahmedabad

Experience: 4+ years

Employment Type: Full-Time

________________________________________

About Us

Join a forward-thinking team at Tecblic, where innovation meets cutting-edge technology. We specialize in delivering AI-driven solutions that empower businesses to thrive in the digital age. If you're passionate about LLMs, machine learning, and pushing the boundaries of Agentic AI, we’d love to have you on board.

________________________________________

Key Responsibilities

• Research and Development: Research, design, and fine-tune machine learning models, with a focus on Large Language Models (LLMs) and Agentic AI systems.

• Model Optimization: Fine-tune and optimize pre-trained LLMs for domain-specific use cases, ensuring scalability and performance.

• Integration: Collaborate with software engineers and product teams to integrate AI models into customer-facing applications and platforms.

• Data Engineering: Perform data preprocessing, pipeline creation, feature engineering, and exploratory data analysis (EDA) to prepare datasets for training and evaluation.

• Production Deployment: Design and implement robust model deployment pipelines, including monitoring and managing model performance in production.

• Experimentation: Prototype innovative solutions leveraging cutting-edge techniques like reinforcement learning, few-shot learning, and generative AI.

• Technical Mentorship: Mentor junior team members on best practices in machine learning and software engineering.

________________________________________

Requirements

Core Technical Skills:

• Proficiency in Python for machine learning and data science tasks.

• Expertise in ML frameworks and libraries like PyTorch, TensorFlow, Hugging Face, Scikit-learn, or similar.

• Solid understanding of Large Language Models (LLMs) such as GPT, T5, BERT, or Bloom, including fine-tuning techniques.

• Experience working on NLP tasks such as text classification, entity recognition, summarization, or question answering.

• Knowledge of deep learning architectures, such as transformers, RNNs, and CNNs.

• Strong skills in data manipulation using tools like Pandas, NumPy, and SQL.

• Familiarity with cloud services like AWS, GCP, or Azure, and experience deploying ML models using tools like Docker, Kubernetes, or serverless functions.

Additional Skills (Good to Have):

• Exposure to Agentic AI (e.g., autonomous agents, decision-making systems) and practical implementation.

• Understanding of MLOps tools (e.g., MLflow, Kubeflow) to streamline workflows and ensure production reliability.

• Experience with generative AI models (GANs, VAEs) and reinforcement learning techniques.

• Hands-on experience in prompt engineering and few-shot/fine-tuned approaches for LLMs.

• Familiarity with vector databases like Pinecone, Weaviate, or FAISS for efficient model retrieval.

• Version control (Git) and familiarity with collaborative development practices.
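The vector-database bullet above reduces to nearest-neighbour search over embeddings; the core operation can be sketched with NumPy cosine similarity (the embeddings are made up, and a real system would generate them with a model and store them in FAISS, Pinecone, or Weaviate):

```python
import numpy as np

def top_k(query, corpus, k=2):
    """Return indices of the k corpus rows most cosine-similar to query."""
    corpus = np.asarray(corpus, dtype=float)
    query = np.asarray(query, dtype=float)
    sims = corpus @ query / (
        np.linalg.norm(corpus, axis=1) * np.linalg.norm(query)
    )
    return np.argsort(sims)[::-1][:k]

# Toy 4-dimensional "embeddings" for three documents.
docs = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
hits = top_k([1.0, 0.05, 0.0, 0.0], docs)
```

Dedicated vector stores exist to make this same lookup fast at millions of vectors via approximate-nearest-neighbour indexes.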

General Skills:

• Strong analytical and mathematical background, including proficiency in linear algebra, statistics, and probability.

• Solid understanding of algorithms and data structures to solve complex ML problems.

• Ability to handle and process large datasets using distributed frameworks like Apache Spark or Dask (optional but useful).

________________________________________

Soft Skills:

• Excellent problem-solving and critical-thinking abilities.

• Strong communication and collaboration skills to work with cross-functional teams.

• Self-motivated, with a continuous learning mindset to keep up with emerging technologies.

ChicMic Studios
Posted by Isha Rana
Mohali
3 - 7 yrs
₹5L - ₹12L / yr
Data Analytics
Machine Learning (ML)
Data Visualization
pandas
Python

We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience in Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus.

Analyze and process large datasets using Python and Pandas.

Develop and optimize machine learning models for predictive analytics.

Create data visualizations using Matplotlib and Seaborn to support decision-making.

Perform data cleaning, feature engineering, and statistical analysis.

Work with structured and unstructured data to extract meaningful insights.

Implement and fine-tune NER models for specific use cases (if required).

Collaborate with cross-functional teams to drive data-driven solutions.

Required Skills & Qualifications:

Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.).

Experience in data analysis, statistical modeling, and machine learning.

Hands-on expertise in data visualization using Matplotlib and Seaborn.

Understanding of SQL and database querying.

Familiarity with NLP techniques and NER models is a plus.

Strong problem-solving and analytical skills.

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Noida
4 - 8 yrs
₹2L - ₹10L / yr
Machine Learning (ML)
Data Science
Azure OpenAI
Python
pandas

Job Title : Sr. Data Scientist

Experience : 5+ Years

Location : Noida (Hybrid – 3 Days in Office)

Shift Timing : 2 PM to 11 PM

Availability : Immediate


Job Description :

We are seeking a Senior Data Scientist to develop and implement machine learning models, predictive analytics, and data-driven solutions.

The role involves data analysis, dashboard development (Looker Studio), NLP, Generative AI (LLMs, Prompt Engineering), and statistical modeling.

Strong expertise in Python (Pandas, NumPy), Cloud Data Science (AWS SageMaker, Azure OpenAI), Agile (Jira, Confluence), and stakeholder collaboration is essential.


Mandatory skills : Machine Learning, Cloud Data Science (AWS SageMaker, Azure OpenAI), Python (Pandas, NumPy), Data Visualization (Looker Studio), NLP & Generative AI (LLMs, Prompt Engineering), Statistical Modeling, Agile (Jira, Confluence), and strong stakeholder communication.

Wissen Technology

Posted by Seema Srivastava
Mumbai, Bengaluru (Bangalore)
5 - 14 yrs
Best in industry
Python
Amazon Web Services (AWS)
SQL
pandas
Amazon Redshift

Job Description: 

Please find the details below:


Experience - 5+ Years

Location - Bangalore


Role Overview

We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

 

Key Responsibilities

  • Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
  • Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
  • Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
  • Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
  • Ensure data quality and consistency by implementing validation and governance practices.
  • Work on data security best practices in compliance with organizational policies and regulations.
  • Automate repetitive data engineering tasks using Python scripts and frameworks.
  • Leverage CI/CD pipelines for deployment of data workflows on AWS.
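The ETL/ELT flow in the responsibilities above, miniaturized in pandas (the CSV content and column names are invented; in the actual stack, extraction would read from S3 or Glue and loading would write to Redshift):

```python
import io
import pandas as pd

RAW_CSV = """event_id,ts,amount
1,2024-01-01T10:00:00,12.50
2,2024-01-01T11:30:00,
3,2024-01-02T09:15:00,7.25
"""

def extract(source: str) -> pd.DataFrame:
    """Extract: parse raw CSV into a typed DataFrame."""
    return pd.read_csv(io.StringIO(source), parse_dates=["ts"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: validate rows and aggregate by day."""
    df = df.dropna(subset=["amount"])   # validation: drop incomplete rows
    df["day"] = df["ts"].dt.date        # derive a partition-friendly key
    return df.groupby("day", as_index=False)["amount"].sum()

def load(df: pd.DataFrame) -> dict:
    """Load: stand-in for a warehouse write, keyed by partition."""
    return {str(r.day): r.amount for r in df.itertuples()}

warehouse = load(transform(extract(RAW_CSV)))
```

Keeping extract/transform/load as separate pure functions is what makes pipelines like this easy to test and to wire into Glue or Airflow later.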

 

Required Skills and Qualifications

  • Professional Experience: 5+ years of experience in data engineering or a related field.
  • Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
  • AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
  • AWS Glue for ETL/ELT.
  • S3 for storage.
  • Redshift or Athena for data warehousing and querying.
  • Lambda for serverless compute.
  • Kinesis or SNS/SQS for data streaming.
  • IAM Roles for security.
  • Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
  • Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
  • DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
  • Version Control: Proficient with Git-based workflows.
  • Problem Solving: Excellent analytical and debugging skills.

 

Optional Skills

  • Knowledge of data modeling and data warehouse design principles.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
  • Exposure to other programming languages like Scala or Java.






Client located in Pune


Agency job
Remote, Pune
3 - 5 yrs
₹10L - ₹15L / yr
Python
Linux/Unix
Amazon Web Services (AWS)
Windows Azure
Large Language Models (LLM) tuning

Position: Data Scientist

Job Category: Embedded HW_SW

Job Type: Full Time

Job Location: Pune

Experience: 3 - 5 years

Notice period: 0-30 days

Must have skills: Python, Linux-Ubuntu based OS, cloud-based platforms

Education Required: Bachelor's / Master's / PhD

Bachelor's or Master's in Computer Science, Statistics, Mathematics, Data Science, or Engineering

Bachelor's with 5 years or Master's with 3 years

Mandatory Skills

  • Bachelor's or Master's in Computer Science, Statistics, Mathematics, Data Science, Engineering, or a related field
  • 3-5 years of experience as a data scientist, with a strong foundation in machine learning fundamentals (e.g., supervised and unsupervised learning, neural networks)
  • Experience with Python programming language (including libraries such as NumPy, pandas, scikit-learn) is essential
  • Deep hands-on experience building computer vision and anomaly detection systems involving algorithm development in fields such as image-segmentation
  • Some experience with open-source OCR models
  • Proficiency in working with large datasets and experience with feature engineering techniques is a plus

Key Responsibilities

  • Work closely with the AI team to help build complex algorithms that provide unique insights into our data using images.
  • Use agile software development processes to make iterative improvements to our back-end systems.
  • Stay up to date with the latest developments in machine learning and data science, exploring new techniques and tools to apply within Customer’s business context.
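Anomaly detection of the kind this role involves often starts from a robust z-score baseline before any learned model; an illustrative sketch on synthetic sensor readings (the threshold and data are assumptions):

```python
import numpy as np

def mad_outliers(values, threshold=3.5):
    """Flag points whose modified z-score exceeds the threshold.

    Uses the median absolute deviation (MAD), a robust spread estimate;
    assumes the MAD is nonzero for the given data.
    """
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > threshold

readings = [10.1, 9.9, 10.0, 10.2, 25.0, 9.8, 10.1]
flags = mad_outliers(readings)
```

For image data the same idea carries over to reconstruction-error scores from vision models, with this statistical baseline as a sanity check.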

Optional Skills

  • Experience working with cloud-based platforms (e.g., Azure, AWS, GCP)
  • Knowledge of computer vision techniques and experience with libraries like OpenCV
  • Excellent communication skills, especially for explaining technical concepts to non-technical business leaders.
  • Ability to work on a dynamic, research-oriented team that has concurrent projects.
  • Working knowledge of Git/version control.
  • Expertise in PyTorch, TensorFlow, Keras.
  • Excellent coding skills, especially in Python.
  • Experience with Linux-Ubuntu based OS


Codezen Tech Solutions

Posted by Noorun Rehmani
Mumbai, Navi Mumbai
3 - 5 yrs
₹7L - ₹13L / yr
Python
Django
Flask
Git
Celery

Responsibilities and Duties:

  • Expert in Python, with knowledge of Python web framework - Django
  • Familiarity with backend automated testing tools and frameworks
  • Experience with backend/API development
  • Architect, develop and maintain backend libraries/codebase, database & server.
  • Develop object-oriented software, with mastery of one or more relevant languages (Django).
  • Evaluate competitive and innovative products and design approaches to identify best practices and encourage innovation.
  • Understanding the requirement of a client, document the scope and chart out a plan of implementing the scope
  • Work with design team to give inputs related to the wire-frames and then the design along with incorporating Client Feedback
  • Explore the difference between B2B and B2C projects before implementing the code
  • Work in teams of 2-3 on various projects as per the requirement using git as version control
  • Having good knowledge of APIs creation and database architecture
  • Good Grasp in respective technology (Django)
  • Documenting the process and main functions along the developing process
  • Design and develop highly scalable, highly available, reliable, secure, and fault-tolerant systems with minimal guidance for one of the fastest-growing companies in India.
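API creation of the sort listed above rests on clean serialization of domain objects; a framework-free sketch with the standard library (the field names are invented, and a Django codebase would normally use DRF serializers instead):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Project:
    id: int
    name: str
    client_type: str  # e.g. "B2B" or "B2C" - shapes the implementation plan

def to_response(project: Project) -> str:
    """Serialize a domain object into a JSON API payload."""
    return json.dumps({"data": asdict(project)})

def from_request(payload: str) -> Project:
    """Validate types and deserialize an incoming JSON payload."""
    data = json.loads(payload)
    return Project(int(data["id"]), str(data["name"]), str(data["client_type"]))

p = Project(1, "storefront", "B2C")
round_tripped = from_request(json.dumps(asdict(p)))
```

Keeping serialization separate from views is what lets the same domain objects back both B2B and B2C endpoints.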

Required Experience, Skills and Qualifications:

  • 3-5 years of experience required
  • Strong hands-on experience with Django (Python)
  • Excellent knowledge of using the Git version control system and deployment via Git.
  • You have creative visualization, critical thinking, deductive & pragmatic reasoning and can think out-of-the-box
  • Ability to quickly adapt & independently work in a fast-paced Agile environment with minimum supervision.
  • A self-starter with demonstrated ability to take initiative, who can proactively identify issues/opportunities and recommend action.
Experiencecom
Remote only
7 - 12 yrs
₹20L - ₹35L / yr
Google Cloud Platform (GCP)
Big Data
Python
SQL
pandas

Description


Come Join Us


Experience.com - We make every experience matter more

Position: Senior GCP Data Engineer

Job Location: Chennai (Base Location) / Remote

Employment Type: Full Time


Summary of Position

A Senior Data Engineer is a professional who specializes in preparing big data infrastructure for analytical or operational uses. He/she develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity, and collaborates with data scientists and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organisation.


Responsibilities:

  • Collaborate with cross-functional teams to define, prioritize, and execute data engineering initiatives aligned with business objectives.
  • Design and implement scalable, reliable, and secure data solutions by industry best practices and compliance requirements.
  • Drive the adoption of cloud-native technologies and architectural patterns to optimize the performance, cost, and reliability of data pipelines and analytics solutions.
  • Mentor and lead a team of Data Engineers.
  • Demonstrate a drive to learn and master new technologies and techniques.
  • Apply strong problem-solving skills with an emphasis on building data-driven or AI-enhanced products.
  • Coordinate with ML/AI and engineering teams to understand data requirements.


Experience & Skills:

  • 8+ years of strong experience in ETL and ELT of data from various sources into data warehouses
  • 8+ years of experience in Python, Pandas, Numpy, and SciPy.
  • 5+ years of Experience in GCP 
  • 5+ years of Experience in BigQuery, PySpark, and Pub/Sub
  • 5+ years of Experience working with and creating data architectures.
  • Certified in Google Cloud Professional Data Engineer.
  • Advanced proficiency in Google Cloud services such as Dataflow, Dataproc, Dataprep, Data Studio, and Cloud Composer.
  • Proficient in writing complex Spark (PySpark) User Defined Functions (UDFs), Spark SQL, and HiveQL.
  • Good understanding of Elasticsearch.
  • Experience in assessing and ensuring data quality, data testing, and addressing data quality issues.
  • Excellent understanding of Spark architecture and underlying frameworks including storage management.
  • Solid background in database design and development, database administration, and software engineering across full life cycles.
  • Experience with NoSQL data stores like MongoDB, DocumentDB, and DynamoDB.
  • Knowledge of data governance principles and practices, including data lineage, metadata management, and access control mechanisms.
  • Experience in implementing and optimizing data security controls, encryption, and compliance measures in GCP environments.
  • Ability to troubleshoot complex issues, perform root cause analysis, and implement effective solutions in a timely manner.
  • Proficiency in data visualization tools such as Tableau, Looker, or Data Studio to create insightful dashboards and reports for business users.
  • Strong communication and interpersonal skills to effectively collaborate with technical and non-technical stakeholders, articulate complex concepts, and drive consensus.
  • Experience with agile methodologies and project management tools like Jira or Asana for sprint planning, backlog grooming, and task tracking.
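The ETL/ELT experience called for above boils down to extract-transform-load passes like this toy sketch — pandas stands in here for the warehouse layer (BigQuery/Dataflow), and the records are synthetic:

```python
import pandas as pd

# Illustrative records standing in for a Pub/Sub or warehouse extract.
raw = pd.DataFrame({
    "event_ts": pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:05",
                                "2024-01-01 11:00"]),
    "user_id": [1, 1, 2],
    "amount": [10.0, None, 7.5],
})

def transform(df):
    """The T step of a tiny ELT pass: drop null amounts, aggregate per user."""
    cleaned = df.dropna(subset=["amount"])
    return (cleaned.groupby("user_id", as_index=False)["amount"]
                   .sum()
                   .rename(columns={"amount": "total_amount"}))

result = transform(raw)
```

In production the same logic would live in a Dataflow/PySpark job or a SQL model, but the shape — extract, validate, aggregate, load — is identical.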


Wissen Technology

at Wissen Technology

4 recruiters
Tony Tom
Posted by Tony Tom
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
uipath
Python
NumPy
pandas

We are seeking a talented UiPath Developer with experience in Python, SQL, Pandas, and NumPy to join our dynamic team. The ideal candidate will have hands-on experience developing RPA workflows using UiPath, along with the ability to automate processes through scripting, data manipulation, and database queries.

This role offers the opportunity to collaborate with cross-functional teams to streamline operations and build innovative automation solutions.

Key Responsibilities:

  • Design, develop, and implement RPA workflows using UiPath.
  • Build and maintain Python scripts to enhance automation capabilities.
  • Utilize Pandas and NumPy for data extraction, manipulation, and transformation within automation processes.
  • Write optimized SQL queries to interact with databases and support automation workflows.
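The responsibilities above can be sketched end to end in miniature: an optimized SQL query does the filtering and aggregation, and pandas shapes the result a bot would consume. The `invoices` table is illustrative only, with SQLite standing in for the production database:

```python
import sqlite3
import pandas as pd

# Hypothetical table the automation reads from.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (vendor TEXT, amount REAL, status TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)", [
    ("Acme", 120.0, "open"),
    ("Acme", 80.0, "paid"),
    ("Globex", 200.0, "open"),
])
conn.commit()

# SQL does the heavy lifting; pandas delivers a frame the RPA workflow iterates.
open_items = pd.read_sql_query(
    "SELECT vendor, SUM(amount) AS total FROM invoices "
    "WHERE status = 'open' GROUP BY vendor ORDER BY total DESC", conn)
```

In a UiPath workflow the resulting DataFrame (or its CSV export) would feed a data table that drives the downstream activities.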

Skills and Qualifications:

  • 2 to 5 years of experience in UiPath development.
  • Strong proficiency in Python and working knowledge of Pandas and NumPy.
  • Good experience with SQL for database interactions.
  • Ability to design scalable and maintainable RPA solutions using UiPath.
Gandhinagar
0 - 2 yrs
₹1L - ₹2.5L / yr
NumPy
pandas
matplotlib
Audio

Job Title: Generative AI Engineer (Specialist in Deep Learning)


Location: Gandhinagar, Ahmedabad, Gujarat

Company: Rayvat Outsourcing

Salary: Up to 2,50,000/- per annum


Job Type: Full-Time


Experience: 0 to 1 Year


Job Overview:


We are seeking a talented and enthusiastic Generative AI Engineer to join our team. As an Intermediate-level engineer, you will be responsible for developing and deploying state-of-the-art generative AI models to solve complex problems and create innovative solutions. You will collaborate with cross-functional teams, working on a variety of projects that range from natural language processing (NLP) to image generation and multimodal AI systems. The ideal candidate has hands-on experience with machine learning models, deep learning techniques, and a passion for artificial intelligence.


Key Responsibilities:


·        Develop, fine-tune, and deploy generative AI models using frameworks such as GPT, BERT, DALL·E, Stable Diffusion, etc.

·        Research and implement cutting-edge machine learning algorithms in NLP, computer vision, and multimodal systems.

·        Collaborate with data scientists, ML engineers, and product teams to integrate AI solutions into products and platforms.

·        Create APIs and pipelines to deploy models in production environments, ensuring scalability and performance.

·        Analyze large datasets to identify key features, patterns, and use cases for model training.

·        Debug and improve existing models by evaluating performance metrics and applying optimization techniques.

·        Stay up-to-date with the latest advancements in AI, deep learning, and generative models to continually enhance the solutions.

·        Document technical workflows, including model architecture, training processes, and performance reports.

·        Ensure ethical use of AI, adhering to guidelines around AI fairness, transparency, and privacy.
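For a flavor of the decoding step inside generative text models, here is a hedged sketch of temperature-scaled softmax sampling; the logits are invented, and real models produce them over vocabularies of tens of thousands of tokens:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """One decoding step: scale logits by temperature, softmax, then sample."""
    if rng is None:
        rng = np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs)), probs

# Lower temperature sharpens the distribution toward the top logit.
token, probs = sample_next_token([2.0, 1.0, 0.1], temperature=0.5)
```

Lower temperatures make generation more deterministic, higher ones more diverse — the main knob most LLM APIs expose.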


Qualifications:


·        Bachelor’s/Master’s degree in Computer Science, Machine Learning, Data Science, or a related field.

·        2-4 years of hands-on experience in machine learning and AI development, particularly in generative AI.

·        Proficiency with deep learning frameworks such as TensorFlow, PyTorch, or similar.

·        Experience with NLP models (e.g., GPT, BERT) or image-generation models (e.g., GANs, diffusion models).

·        Strong knowledge of Python and libraries like NumPy, Pandas, scikit-learn, etc.

·        Experience with cloud platforms (e.g., AWS, GCP, Azure) for AI model deployment and scaling.

·        Familiarity with APIs, RESTful services, and microservice architectures.

·        Strong problem-solving skills and the ability to troubleshoot and optimize AI models.

·        Good understanding of data preprocessing, feature engineering, and handling large datasets.

·        Excellent written and verbal communication skills, with the ability to explain complex concepts clearly.


Preferred Skills:


·        Experience with multimodal AI systems (combining text, image, and/or audio data).

·        Familiarity with ML Ops and CI/CD pipelines for deploying machine learning models.

·        Experience in A/B testing and performance monitoring of AI models in production.

·        Knowledge of ethical AI principles and AI governance.


What We Offer:


·        Competitive salary and benefits package.

·        Opportunities for professional development and growth in the rapidly evolving AI field.

·        Collaborative and dynamic work environment, with access to cutting-edge AI technologies.

·        Work on impactful projects with real-world applications.

Remote only
4 - 5 yrs
₹9.6L - ₹12L / yr
SQL
RESTful APIs
Python
pandas
ETL

We are seeking a Data Engineer ( Snowflake, Bigquery, Redshift) to join our team. In this role, you will be responsible for the development and maintenance of fault-tolerant pipelines, including multiple database systems.


Responsibilities:

  • Collaborate with engineering teams to create REST API-based pipelines for large-scale MarTech systems, optimizing for performance and reliability.
  • Develop comprehensive data quality testing procedures to ensure the integrity and accuracy of data across all pipelines.
  • Build scalable dbt models and configuration files, leveraging best practices for efficient data transformation and analysis.
  • Partner with lead data engineers in designing scalable data models.
  • Conduct thorough debugging and root cause analysis for complex data pipeline issues, implementing effective solutions and optimizations.
  • Follow and adhere to group's standards such as SLAs, code styles, and deployment processes.
  • Anticipate breaking changes and implement backwards-compatibility strategies for API schema changes.
  • Assist the team in monitoring pipeline health via observability tools and metrics.
  • Participate in refactoring efforts as platform application needs evolve over time.
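The data-quality testing mentioned above reduces to checks like these — the uniqueness and not-null constraints a dbt schema test would encode, sketched in plain pandas with hypothetical column names:

```python
import pandas as pd

def run_quality_checks(df, key, non_null_cols):
    """Minimal data-quality gate: primary-key uniqueness plus not-null columns."""
    failures = []
    if df[key].duplicated().any():
        failures.append(f"duplicate values in key column '{key}'")
    for col in non_null_cols:
        if df[col].isna().any():
            failures.append(f"nulls found in column '{col}'")
    return failures

# Illustrative batch with one duplicate key and one null email.
batch = pd.DataFrame({"id": [1, 2, 2], "email": ["a@x.io", None, "c@x.io"]})
issues = run_quality_checks(batch, key="id", non_null_cols=["email"])
```

In dbt the equivalent would be `unique` and `not_null` tests in the model's YAML; the pipeline fails (or alerts) when `issues` is non-empty.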


Requirements:

  • Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field.
  • 3+ years of professional experience with a cloud database such as Snowflake, Bigquery, Redshift.
  • 1+ years of professional experience with dbt (cloud or core).
  • Exposure to various data processing technologies such as OLAP and OLTP and their applications in real-world scenarios.
  • Exposure to work cross-functionally with other teams such as Product, Customer Success, Platform Engineering.
  • Familiarity with orchestration tools such as Dagster/Airflow.
  • Familiarity with ETL/ELT tools such as dltHub/Meltano/Airbyte/Fivetran and DBT.
  • High intermediate to advanced SQL skills (comfort with CTEs, window functions).
  • Proficiency with Python and related libraries (e.g., pandas, sqlalchemy, psycopg2) for data manipulation, analysis, and automation.
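To make the CTE and window-function requirement concrete, a self-contained example — SQLite (3.25+ for window functions) stands in for Snowflake/BigQuery/Redshift, and the `orders` table is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("a", 10), ("a", 30), ("b", 20)])

# A CTE feeding a window function: per-customer totals, then a rank over them.
rows = conn.execute("""
    WITH totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
    )
    SELECT customer, total, RANK() OVER (ORDER BY total DESC) AS rnk
    FROM totals
    ORDER BY rnk
""").fetchall()
```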


Benefits:

  • Work Location: Remote
  • 5 days working​


You can apply directly through the link: https://zrec.in/e9578?source=CareerSite


Explore our Career Page for more such jobs : careers.infraveo.com



Someshwara Software
Chandana Kandukur
Posted by Chandana Kandukur
Bengaluru (Bangalore)
2 - 4 yrs
₹4L - ₹12L / yr
Python
TensorFlow
pandas
Git
Flask

 Job Description: AI/ML Engineer

 

Location: Bangalore (On-site)  

Experience: 2+ years of relevant experience

 

About the Role:

We are seeking a skilled and passionate AI/ML Engineer to join our team in Bangalore. The ideal candidate will have over two years of experience in developing, deploying, and maintaining AI and machine learning models. As an AI/ML Engineer, you will work closely with our data science team to build innovative solutions and deploy them in a production environment.

 

 Key Responsibilities:

- Develop, implement, and optimize machine learning models.

- Perform data manipulation, exploration, and analysis to derive actionable insights.

- Use advanced computer vision techniques, including YOLO and other state-of-the-art methods, for image processing and analysis.

- Collaborate with software developers and data scientists to integrate AI/ML solutions into the company's applications and products.

- Design, test, and deploy scalable machine learning solutions using TensorFlow, OpenCV, and other related technologies.

- Ensure the efficient storage and retrieval of data using SQL and data manipulation libraries such as pandas and NumPy.

- Contribute to the development of backend services using Flask or Django for deploying AI models.

- Manage code using Git and containerize applications using Docker when necessary.

- Stay updated with the latest advancements in AI/ML and integrate them into existing projects.
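A tiny sketch of the data-augmentation side of this work — a horizontal flip plus [0, 1] normalization in NumPy. A real pipeline for YOLO-style training would also transform bounding boxes, add color jitter, and so on:

```python
import numpy as np

def augment(image):
    """Two classic augmentations: horizontal flip and [0, 1] normalization."""
    flipped = image[:, ::-1]                     # mirror along the width axis
    return flipped.astype(np.float32) / 255.0    # scale pixel values to [0, 1]

# A 2x2 grayscale "image" keeps the arithmetic easy to check by hand.
img = np.array([[0, 255],
                [128, 64]], dtype=np.uint8)
out = augment(img)
```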

 

Required Skills:

- Proficiency in Python and its associated libraries (NumPy, pandas).

- Hands-on experience with TensorFlow for building and training machine learning models.

- Strong knowledge of linear algebra and data augmentation techniques.

- Experience with computer vision libraries like OpenCV and frameworks like YOLO.

- Proficiency in SQL for database management and data extraction.

- Experience with Flask for backend development.

- Familiarity with version control using Git.

 

 Optional Skills:

- Experience with PyTorch, Scikit-learn, and Docker.

- Familiarity with Django for web development.

- Knowledge of GPU programming using CuPy and CUDA.

- Understanding of parallel processing techniques.

 

Qualifications:

- Bachelor's degree in Computer Science, Engineering, or a related field.

- Demonstrated experience in AI/ML, with a portfolio of past projects.

- Strong analytical and problem-solving skills.

- Excellent communication and teamwork skills.

 

 Why Join Us?

- Opportunity to work on cutting-edge AI/ML projects.

- Collaborative and dynamic work environment.

- Competitive salary and benefits.

- Professional growth and development opportunities.

 

If you're excited about using AI/ML to solve real-world problems and have a strong technical background, we'd love to hear from you!

 

Apply now to join our growing team and make a significant impact!

TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Pune
4 - 6 yrs
₹15L - ₹25L / yr
PyTorch
Python
Scikit-Learn
NumPy
pandas

Who are we looking for?  


We are looking for a Senior Data Scientist, who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.  

 

Job Summary 

  • Supporting company mission by understanding complex business problems through data-driven solutions. 
  • Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ... 
  • Developing end-to-end ML production-ready solutions and visualizations. 
  • Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards. 
  • Communicating complex technical concepts and findings to non-technical stakeholders of the projects 
  • Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms. 
  • Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings. 
  • Carrying out research collaborating with internal and external teams and facilitating review of ML systems for innovative ideas to prototype new models. 
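For a flavor of the time-series sensor analysis described above, a minimal pandas sketch — synthetic readings, a trailing rolling mean, and a crude spike flag (real industrial anomaly detection would be far more involved):

```python
import pandas as pd

# Synthetic hourly sensor readings standing in for production time-series data.
ts = pd.Series([10.0, 12.0, 11.0, 50.0, 12.0, 11.0],
               index=pd.date_range("2024-01-01", periods=6, freq="h"))

smooth = ts.rolling(window=3, min_periods=1).mean()  # trailing 3-point mean
spikes = ts[ts > smooth * 2]                         # crude anomaly flag
```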

 

Qualification and experience 

  • B.Tech/Masters/Ph.D. in computer science, electrical engineering, mathematics, data science, and related fields. 
  • 5+ years of professional experience in the field of machine learning, and data science. 
  • Experience with large-scale Time-series data-based production code development is a plus. 

 

Skills and competencies 

  • Familiarity with Docker, and ML Libraries like PyTorch, sklearn, pandas, SQL, and Git is a must. 
  • Ability to work on multiple projects. Must have strong design and implementation skills. 
  • Ability to conduct research based on complex business problems. 
  • Strong presentation skills and the ability to collaborate in a multi-disciplinary team. 
  • Must have programming experience in Python. 
  • Excellent English communication skills, both written and verbal. 


Benefits and Perks

  • Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you. 
  • Progressive leave policy for effective work-life balance. 
  • Get mentored by highly qualified internal resource groups and opportunity to avail industry-driven mentorship program, as we believe in empowering people.  
  • Multicultural peer groups and supportive workplace policies.  
  • Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work. 


 Hiring Process 

  • Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements. 
  • First Round: Technical round 1 to gauge your domain knowledge and functional expertise. 
  • Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
  • Final HR Round: Culture fit round and compensation discussions.
  • Offer: Congratulations you made it!  


If this position sparked your interest, apply now to initiate the screening process.

golden eagle it technologies pvt ltd
Akansha Kanojia
Posted by Akansha Kanojia
Indore
4 - 5 yrs
₹3L - ₹15L / yr
Python
Django
Flask
AWS Lambda
NumPy

Job Description:-

Designation : Python Developer

Location : Indore | WFO

Skills : Python, Django, Flask, NumPy, Pandas, RESTful APIs, AWS.

Python Developer Responsibilities:- 

1. Coordinating with development teams to determine application requirements.

2. Writing scalable code using Python programming language.

3. Testing and debugging applications.

4. Developing back-end components.

5. Integrating user-facing elements using server-side logic.

6. Assessing and prioritizing client feature requests.

7. Integrating data storage solutions.

8. Coordinating with front-end developers.

9. Reprogramming existing databases to improve functionality.

10. Developing digital tools to monitor online traffic.

Python Developer Requirements:-

1. Bachelor's degree in computer science, computer engineering, or related field.

2. At least 3+ years of experience as a Python developer.

3. Expert knowledge of Python and related frameworks including Django and Flask.

4. A deep understanding and multi-process architecture and the threading limitations of Python.

5. Familiarity with server-side templating languages including Jinja 2 and Mako.

6. Ability to integrate multiple data sources into a single system.

7. Familiarity with testing tools.

8. Ability to collaborate on projects and work independently when required.


P99soft
anu sha
Posted by anu sha
Bengaluru (Bangalore)
6 - 12 yrs
₹10L - ₹15L / yr
Python
Flask
pandas
Web Development
azure

Role / Designation : Python Developer

 

Location: Bangalore, India

Skills : Certification: AI900, AZ900 Technical or Key Skills: Primary Skills Python, Flask, Web development. Knowledge on Azure Cloud, Application development, API development


Profile: IT professional with 6+ years of experience in 

• Hands-on experience with Python libraries such as Pandas, NumPy, openpyxl 

• Hands on experience of Python libraries with multiple document types (excel, csv, pdf and images) 

• Working with huge data sets, data analysis and provide ETL and EDA analysis reports.

 • 5+ years' experience in any of the programming languages like Python (mandatory), Java and C/C++.

 • Must have experience with Azure PaaS and IaaS services like Azure Function Apps, Azure Kubernetes Service, Storage Accounts, Key Vault, etc. 

• Experience with databases, both SQL and NoSQL 

• Develop methodologies for the data analysis, data extraction, data transformations, preprocessing of data. 

• Experience in deploying applications, packages in Azure environment. 

• Writing scalable code using Python programming language.

 • Testing and debugging applications.

 • Developing back-end components.

 • Integrating user-facing elements using server-side logic. 

• Excellent problem solving/analytical skills and complex troubleshooting methods.

 • Ability to work through ambiguous situations.

 • Excellent presentation, verbal, and written communication skills. Education: BE/BTech/BSc 





Wheelseye Technology

at Wheelseye Technology

5 recruiters
Mohit Sharma
Posted by Mohit Sharma
Gurugram
2 - 5 yrs
₹5L - ₹15L / yr
SQL
pandas
skill iconPython
A/B Testing

REQUIREMENTS

Core skills:

Technical Experience (Must have) - working knowledge of any visualization tool (Metabase, Tableau, QlikSense, Looker, Superset, Power BI, etc.), strong SQL & Python, Excel/Gsheet

Product Knowledge (Must have) - Knowledge of Google Analytics/BigQuery or Mixpanel; must have worked on A/B testing and event writing. Must be familiar with product (app, website) data and have good product sense

Analytical Thinking: Outstanding analytical and problem-solving skills; the ability to break down the problem statement during execution.


Core Experience:

● Overall experience of 2-5 years in the analytics domain

● Hands-on experience in the analytics domain building data-story dashboards, doing RCA, and analyzing data.

● Understanding of and hands-on experience with the product, i.e. funnels, A/B experiments, etc.

● Ability to define the right metric for a specific product feature or experiment and do the impact analysis.

● Ability to explain complex data insights to a wider audience and lay out the next steps and recommendations

● Experience in analyzing, exploring, and mining large data sets to support reporting and ad-hoc analysis

● Strong attention to detail and accuracy of output.
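A stdlib-only sketch of the statistics behind an A/B readout — a two-proportion z-test under the normal approximation. The conversion counts are invented:

```python
from math import erf, sqrt

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    Returns the z statistic and a two-sided p-value under the
    normal approximation (no scipy required)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))      # standard normal CDF
    return z, 2 * (1 - phi)

# Invented counts: 10% baseline vs 13% variant conversion.
z, p = ab_z_test(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
```

A p-value below the chosen threshold (commonly 0.05) is what lets the analyst call the experiment's lift significant rather than noise.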

Mitibase
Vaidehi Ghangurde
Posted by Vaidehi Ghangurde
Pune
2 - 4 yrs
₹6L - ₹8L / yr
Vue.js
AngularJS (1.x)
React.js
Angular (2+)
JavaScript

·      The Objective:

You will play a crucial role in designing, implementing, and maintaining our data infrastructure, run tests and update the systems


·      Job function and requirements

 

o  Expert in Python, Pandas, and NumPy, with knowledge of Python web frameworks such as Django and Flask.

o  Able to integrate multiple data sources and databases into one system.

o  Basic understanding of frontend technologies like HTML, CSS, JavaScript.

o  Able to build data pipelines.

o  Strong unit test and debugging skills.

o  Understanding of fundamental design principles behind a scalable application

o  Good understanding of RDBMS databases such as MySQL or PostgreSQL.

o  Able to analyze and transform raw data.
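A minimal pandas sketch of integrating two data sources into one view, in the spirit of the requirements above — the CRM and signal tables are hypothetical:

```python
import pandas as pd

# Hypothetical sources: a CRM export and a table of job-change signals.
contacts = pd.DataFrame({"contact_id": [1, 2], "name": ["Asha", "Ravi"]})
signals = pd.DataFrame({"contact_id": [2, 2, 3],
                        "event": ["job_change", "promotion", "job_change"]})

# A left join keeps every CRM contact; contacts with at least one
# matching signal become "warm" leads.
merged = contacts.merge(signals, on="contact_id", how="left")
warm = merged.dropna(subset=["event"])["name"].unique().tolist()
```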

 

·      About us

Mitibase helps companies find warm prospects every month that are most relevant, and then helps their team to act on those with automation. We do so by automatically tracking key accounts and contacts for job changes and relationships triggers and surfaces them as warm leads in your sales pipeline.

ZeMoSo Technologies

at ZeMoSo Technologies

11 recruiters
HR Team
Posted by HR Team
Remote only
3 - 6 yrs
Best in industry
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

Job Description: 

Machine Learning / AI Engineer (with 3+ years of experience)


We are seeking a highly skilled and passionate Machine Learning / AI Engineer to join our newly established data science practice area. In this role, you will primarily focus on working with Large Language Models (LLMs) and contribute to building generative AI applications. This position offers an exciting opportunity to shape the future of AI technology while charting an interesting career path within our organization.


Responsibilities:


1. Develop and implement machine learning models: Utilize your expertise in machine learning and artificial intelligence to design, develop, and deploy cutting-edge models, with a particular emphasis on Large Language Models (LLMs). Apply your knowledge to solve complex problems and optimize performance.


2. Building generative AI applications: Collaborate with cross-functional teams to conceptualize, design, and build innovative generative AI applications. Work on projects that push the boundaries of AI technology and deliver impactful solutions to real-world problems.


3. Data preprocessing and analysis: Collect, clean, and preprocess large volumes of data for training and evaluation purposes. Conduct exploratory data analysis to gain insights and identify patterns that can enhance the performance of AI models.


4. Model training and evaluation: Develop robust training pipelines for machine learning models, incorporating best practices in model selection, feature engineering, and hyperparameter tuning. Evaluate model performance using appropriate metrics and iterate on the models to improve accuracy and efficiency.


5. Research and stay up to date: Keep abreast of the latest advancements in machine learning, natural language processing, and generative AI. Stay informed about industry trends, emerging techniques, and open-source libraries, and apply relevant findings to enhance the team's capabilities.


6. Collaborate and communicate effectively: Work closely with a multidisciplinary team of data scientists, software engineers, and domain experts to drive AI initiatives. Clearly communicate complex technical concepts and findings to both technical and non-technical stakeholders.


7. Experimentation and prototyping: Explore novel ideas, experiment with new algorithms, and prototype innovative solutions. Foster a culture of innovation and contribute to the continuous improvement of AI methodologies and practices within the organization.


Requirements:


1. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Relevant certifications in machine learning, deep learning, or AI are a plus.


2. Experience: A minimum of 3+ years of professional experience as a Machine Learning / AI Engineer, with a proven track record of developing and deploying machine learning models in real-world applications.


3. Strong programming skills: Proficiency in Python and experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, pandas). Experience with cloud platforms (e.g., AWS, Azure, GCP) for model deployment is preferred.


4. Deep-learning expertise: Strong understanding of deep learning architectures (e.g., convolutional neural networks, recurrent neural networks, transformers) and familiarity with Large Language Models (LLMs) such as GPT-3, GPT-4, or equivalent.


5. Natural Language Processing (NLP) knowledge: Familiarity with NLP techniques, including tokenization, word embeddings, named entity recognition, sentiment analysis, text classification, and language generation.


6. Data manipulation and preprocessing skills: Proficiency in data manipulation using SQL and experience with data preprocessing techniques (e.g., cleaning, normalization, feature engineering). Familiarity with big data tools (e.g., Spark) is a plus.


7. Problem-solving and analytical thinking: Strong analytical and problem-solving abilities, with a keen eye for detail. Demonstrated experience in translating complex business requirements into practical machine learning solutions.


8. Communication and collaboration: Excellent verbal and written communication skills, with the ability to explain complex technical concepts to diverse stakeholders
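A toy illustration of the NLP fundamentals in requirement 5 — whitespace tokenization into bag-of-words vectors and a cosine similarity. Real systems would use learned embeddings; the vocabulary here is invented:

```python
import numpy as np

def bow_vector(text, vocab):
    """Whitespace tokenization into a bag-of-words count vector —
    the simplest of the text representations listed above."""
    counts = dict.fromkeys(vocab, 0)
    for token in text.lower().split():
        if token in counts:
            counts[token] += 1
    return np.array([counts[w] for w in vocab], dtype=float)

vocab = ["data", "science", "models", "pizza"]
a = bow_vector("Data science models data", vocab)
b = bow_vector("science models", vocab)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```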


[x]cube LABS

at [x]cube LABS

2 candid answers
1 video
Krishna kandregula
Posted by Krishna kandregula
Hyderabad
2 - 6 yrs
₹8L - ₹20L / yr
ETL
Informatica
Data Warehouse (DWH)
PowerBI
DAX
  • Creating and managing ETL/ELT pipelines based on requirements
  • Build PowerBI dashboards and manage datasets needed.
  • Work with stakeholders to identify data structures needed for future and perform any transformations including aggregations.
  • Build data cubes for real-time visualisation needs and CXO dashboards.
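The data-cube idea above can be sketched with a pandas pivot table — a small in-memory stand-in for the aggregate a Power BI dashboard tile would read (the sales data is synthetic):

```python
import pandas as pd

sales = pd.DataFrame({
    "region": ["N", "N", "S", "S"],
    "quarter": ["Q1", "Q2", "Q1", "Q1"],
    "revenue": [100, 150, 80, 120],
})

# region x quarter revenue totals — the aggregation a dashboard would query.
cube = sales.pivot_table(index="region", columns="quarter",
                         values="revenue", aggfunc="sum", fill_value=0)
```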


Required Tech Skills


  • Microsoft Power BI & DAX
  • Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark
  • Azure Synapse, Azure Databricks, Azure HDInsight, Azure Data Factory



An 8 year old IT Services and consulting company.

An 8 year old IT Services and consulting company.

Agency job
via Startup Login by Shreya Sanchita
Remote, Hyderabad, Bengaluru (Bangalore)
8 - 15 yrs
₹20L - ₹55L / yr
Python
Django
Flask
Data Analytics
Data Science

CTC Budget: 35-55LPA

Location: Hyderabad (Remote after 3 months WFO)


Company Overview:


An 8-year-old IT Services and consulting company based in Hyderabad providing services in maximizing product value while delivering rapid incremental innovation, possessing extensive SaaS company M&A experience including 20+ closed transactions on both the buy and sell sides. They have over 100 employees and looking to grow the team.


  • 6+ years of experience as a Python developer.
  • Experience in web development using Python and Django Framework.
  • Experience in Data Analysis and Data Science using Pandas, NumPy and scikit-learn - (GTH)
  • Experience in developing User Interface using HTML, JavaScript, CSS.
  • Experience in server-side templating languages including Jinja 2 and Mako
  • Knowledge of Kafka and RabbitMQ (GTH)
  • Experience with Docker, Git and AWS
  • Ability to integrate multiple data sources into a single system.
  • Ability to collaborate on projects and work independently when required.
  • DB (MySQL, PostgreSQL, SQL)


Selection Process: 2-3 Interview rounds (Tech, VP, Client)

Cambridge Technology

at Cambridge Technology

2 recruiters
Muthyala Shirish Kumar
Posted by Muthyala Shirish Kumar
Hyderabad
2 - 15 yrs
₹10L - ₹40L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

From building entire infrastructures or platforms to solving complex IT challenges, Cambridge Technology helps businesses accelerate their digital transformation and become AI-first businesses. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them figure out the perfect approach, solutions, and ecosystem for their business. Our experts help customers leverage the right AI, big data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.


No Of Positions: 1


Skills required: 

  • The ideal candidate will have a bachelor’s degree in data science, statistics, or a related discipline with 4-6 years of experience, or a master’s degree with 4-6 years of experience. A strong candidate will also possess many of the following characteristics:
  • Strong problem-solving skills with an emphasis on achieving proof-of-concept
  • Knowledge of statistical techniques and concepts (regression, statistical tests, etc.)
  • Knowledge of machine learning and deep learning fundamentals
  • Experience with Python implementations to build ML and deep learning algorithms (e.g., pandas, NumPy, scikit-learn, statsmodels, Keras, PyTorch, etc.)
  • Experience writing and debugging code in an IDE
  • Experience using managed web services (e.g., AWS, GCP, etc.)
  • Strong analytical and communication skills
  • Curiosity, flexibility, creativity, and a strong tolerance for ambiguity
  • Ability to learn new tools from documentation and internet resources.
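As a quick illustration of the regression fundamentals listed above, a linear model can be fitted with NumPy's least-squares solver (the data below is synthetic, purely for demonstration):

```python
import numpy as np

# Fit y = w0 + w1*x by ordinary least squares.
# Synthetic data follows y = 2x + 1 exactly, so the recovered
# coefficients should match to numerical precision.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Design matrix with an intercept column of ones.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

intercept, slope = coef
print(round(intercept, 6), round(slope, 6))  # 1.0 2.0
```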

Roles and responsibilities :

  • You will work on a small, core team alongside other engineers and business leaders throughout Cambridge with the following responsibilities:
  • Collaborate with client-facing teams to design and build operational AI solutions for client engagements.
  • Identify relevant data sources for data wrangling and EDA
  • Identify model architectures to use for client business needs.
  • Build full-stack data science solutions up to MVP that can be deployed into existing client business processes or scaled up based on clear documentation.
  • Present findings to teammates and key stakeholders in a clear and repeatable manner.

Experience :

2 - 14 Yrs

Read more
JRD Systems

at JRD Systems

1 recruiter
Lavanya B
Posted by Lavanya B
Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹8L / yr
skill iconPython
skill iconDjango
skill iconFlask
Object Oriented Programming (OOPs)
skill iconAmazon Web Services (AWS)
+9 more

·      4+ years of experience as a Python Developer.

·      Good Understanding of Object-Oriented Concepts and Solid principles.

·      Good Understanding in Programming and analytical skills.

·      Should have hands on experience in AWS Cloud Service like S3, Lambda functions Knowledge. (Must Have)

·      Should have experience Working with large datasets (Must Have)

·      Proficient in using NumPy, Pandas. (Must Have)

·      Should have hands on experience on Mysql (Must Have)

·      Should have experience in debugging Python applications (Must have)

·      Knowledge of working on Flask.

·      Knowledge of object-relational mapping (ORM).

·      Able to integrate multiple data sources and databases into one system

·      Proficient understanding of code versioning tools such as Git, SVN

·      Strong at problem-solving and logical abilities

·      Sound knowledge of Front-end technologies like HTML5, CSS3, and JavaScript 

·      Strong commitment and desire to learn and grow.
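The "large datasets" requirement above usually means processing data in bounded-memory chunks rather than loading it whole; a minimal pandas sketch (file contents and column names are invented for illustration):

```python
import io
import pandas as pd

# Simulate a large CSV with an in-memory buffer; in practice this
# would be a file path or a database export.
csv_data = io.StringIO(
    "region,amount\nnorth,10\nsouth,20\nnorth,30\nsouth,40\n"
)

# Process the file in chunks so memory stays bounded even for
# datasets far larger than RAM, then combine the partial sums.
totals = {}
for chunk in pd.read_csv(csv_data, chunksize=2):
    for region, amount in chunk.groupby("region")["amount"].sum().items():
        totals[region] = totals.get(region, 0) + int(amount)

print(totals)  # {'north': 40, 'south': 60}
```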

Read more
Polybyte Technologies
Ahmedabad
1 - 3 yrs
₹4L - ₹10L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+6 more

We are seeking a skilled and motivated Python Full Stack Developer to join us. The ideal candidate will have experience with Python, JavaScript and its related technologies, as well as a passion for developing efficient and scalable software solutions.


Responsibilities:


  • Design and develop high-quality, scalable applications using Python, Django, DRF, FastAPI and JavaScript frameworks such as React or Vue.js
  • Analyze business requirements and develop software solutions to meet those needs
  • Write clean, maintainable, and efficient code
  • Test software solutions to ensure they meet performance, scalability, and reliability requirements
  • Debug and troubleshoot issues in the software
  • Stay up-to-date with emerging trends and technologies in Python development


Qualifications:


  • Bachelor's or Master's degree in Computer Science or related field
  • At least 2 years of experience in developing applications using Python, Django, DRF or FastAPI.
  • At least 2 years of experience in using front-end JavaScript frameworks such as jQuery, React or Vue.js
  • Experience with database technologies such as PostgreSQL and MongoDB
  • Experience with AWS or other cloud platforms
  • Ability to write clean and maintainable code
  • Strong analytical and problem-solving skills
  • Excellent written and verbal communication skills


Nice to have:


  • Knowledge of Trading in stocks, forex, futures etc.
  • Knowledge of automated trading
  • Experience with Different Trading Platforms


We offer:


  • Competitive salary
  • Flexible working hours


Job Types: Full-time, Regular / Permanent


Salary: ₹400,000.00 - ₹1,000,000.00 per year


Benefits:

  • Flexible schedule

Schedule:

  • Day shift
  • Monday to Friday

Supplemental pay types:

  • Overtime pay
  • Yearly bonus
  • Performance-based bonus

Ability to commute/relocate:

  • Mondeal heights, SG Highway, Ahmedabad - 380015, Gujarat: Reliably commute or planning to relocate before starting work (Required)

Education:

  • Bachelor's (Preferred)

Experience:

  • Python: 1-3 years (Required)
  • JavaScript: 1-3 years (Required)
Read more
[x]cube LABS

at [x]cube LABS

2 candid answers
1 video
Samudrala SaiAnvesh
Posted by Samudrala SaiAnvesh
Hyderabad
3 - 5 yrs
skill iconPython
skill iconDjango
skill iconFlask
PyData
pandas
+3 more

Job Description:


  • 3 - 4 years of hands-on Python programming & libraries like PyData, Pandas
  • Exposure to MongoDB
  • Experience in writing Unit Test cases
  • Expertise in writing medium/advanced SQL Database queries
  • Strong Verbal/Written communication skills
  • Ability to work with onsite counterpart teams
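The unit-testing expectation above can be as simple as pairing each helper with plain pytest-style assertions (the function below is a made-up example, not from any real codebase):

```python
def normalize_scores(scores):
    """Scale a list of numbers into the 0-1 range.

    Returns an empty list for empty input; if all values are
    equal, every score maps to 0.0.
    """
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]


# pytest-style unit checks: bare asserts that pytest collects
# from any function named test_*.
def test_normalize_scores():
    assert normalize_scores([]) == []
    assert normalize_scores([5, 5]) == [0.0, 0.0]
    assert normalize_scores([0, 5, 10]) == [0.0, 0.5, 1.0]


test_normalize_scores()
print("all tests passed")
```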


Read more
Cubera Tech India Pvt Ltd
Surabhi Koushik
Posted by Surabhi Koushik
Bengaluru (Bangalore)
2 - 3 yrs
₹24L - ₹35L / yr
skill iconData Science
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
SQL
+6 more

Data Scientist

 

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

 

What you’ll do?

 

  • Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
  • Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
  • Research new and innovative machine learning approaches.
  • Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
  • Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.  
  • Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
  • Work with your team on ambiguous problem areas in existing or new ML initiatives

What are we looking for?

  • Ability to write a SQL query to pull the data you need.
  • Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn and Matplotlib.
  • Experience in TensorFlow and/or R modelling and/or PyTorch
  • Ability to understand a business problem and translate and structure it into a data science problem. 

 

Job Category: Data Science

Job Type: Full Time

Job Location: Bangalore

 

Read more
Chennai, Bengaluru (Bangalore), Hyderabad
5 - 8 yrs
₹10L - ₹25L / yr
skill iconPython
Robot Framework
Selenium
pandas

Experience : 5-8 years

 

Location : Bangalore,Chennai and hyderabad

 

 

 

Python Developer (1 Position)

Must have skills: 

·        Experience in advanced Python

·        Experience in GUI/test automation tools and libraries (Robot Framework, Selenium, Sikuli, etc.)

·        Ability to create UI Automation scripts to execute in Remote/Citrix servers

·        Knowledge of analytical libraries like Pandas, NumPy, SciPy, PyTorch, etc.

·        AWS skillset

 

Nice to have skills:

·        Experience in SQL and Big Data analytic tools like Hive and Hue

·        Experience in Machine learning

·        Experience in Linux administration

Read more
Indium Software

at Indium Software

16 recruiters
Swaathipriya P
Posted by Swaathipriya P
Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹1L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more
2+ years of Analytics with predominant experience in SQL, SAS, Statistics, R , Python, Visualization
Experienced in writing complex SQL select queries (window functions & CTEs) with advanced SQL experience
Should work as an individual contributor for the initial few months; a team will be aligned based on project movement
Strong in querying logic and data interpretation
Solid communication and articulation skills
Able to handle stakeholders independently with minimal intervention from the reporting manager
Develop strategies to solve problems in logical yet creative ways
Create custom reports and presentations accompanied by strong data visualization and storytelling
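The window-function-and-CTE requirement above can be demonstrated against an in-memory SQLite database (table and column names here are invented for illustration):

```python
import sqlite3

# Build a toy sales table in-memory.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (rep TEXT, region TEXT, amount INT);
    INSERT INTO sales VALUES
        ('a', 'north', 100), ('b', 'north', 300),
        ('c', 'south', 200), ('d', 'south', 150);
""")

# A CTE computes per-region ranks with a window function, then the
# outer query keeps only the top rep per region.
query = """
WITH regional AS (
    SELECT rep, region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
)
SELECT rep, region, amount FROM regional WHERE rnk = 1 ORDER BY region;
"""

top_reps = conn.execute(query).fetchall()
print(top_reps)  # [('b', 'north', 300), ('c', 'south', 200)]
```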
Read more
One of our Premium Client

One of our Premium Client

Agency job
Chennai
3 - 8 yrs
₹3L - ₹17L / yr
skill iconData Science
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
skill iconDeep Learning
+7 more

Job Description – Data Science  

 

Basic Qualification:

  • ME/MS from premier institute with a background in Mechanical/Industrial/Chemical/Materials engineering.
  • Strong Analytical skills and application of Statistical techniques to problem solving
  • Expertise in algorithms, data structures and performance optimization techniques
  • Proven track record of demonstrating end to end ownership involving taking an idea from incubator to market
  • Minimum 2+ years of experience in data analysis, statistical analysis, data mining, and algorithms for optimization.

Responsibilities

The Data Engineer/Analyst will

  • Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
  • Clear interaction with Business teams including product planning, sales, marketing, finance for defining the projects, objectives.
  • Mine and analyze data from company databases to drive optimization and improvement of product and process development, marketing techniques and business strategies
  • Coordinate with different R&D and Business teams to implement models and monitor outcomes.
  • Mentor team members towards developing quick solutions for business impact.
  • Skilled at all stages of the analysis process including defining key business questions, recommending measures, data sources, methodology and study design, dataset creation, analysis execution, interpretation and presentation and publication of results.

 

Preferred Qualifications
  • 4+ years’ experience in MNC environment with projects involving ML, DL and/or DS
  • Experience in Machine Learning, Data Mining or Machine Intelligence (Artificial Intelligence)
  • Knowledge on Microsoft Azure will be desired.
  • Expertise in machine learning such as Classification, Data/Text Mining, NLP, Image Processing, Decision Trees, Random Forest, Neural Networks, Deep Learning Algorithms
  • Proficient in Python and its various libraries such as NumPy, Matplotlib, Pandas
  • Superior verbal and written communication skills, ability to convey rigorous mathematical concepts and considerations to Business Teams.
  • Experience in infra development / building platforms is highly desired.
  • A drive to learn and master new technologies and techniques.
Read more
Novo

at Novo

2 recruiters
Dishaa Ranjan
Posted by Dishaa Ranjan
Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹25L - ₹35L / yr
SQL
skill iconPython
pandas
Scikit-Learn
TensorFlow
+1 more

About Us: 

Small businesses are the backbone of the US economy, comprising almost half of the GDP and the private workforce. Yet, big banks don’t provide the access, assistance and modern tools that owners need to successfully grow their business. 


We started Novo to challenge the status quo—we’re on a mission to increase the GDP of the modern entrepreneur by creating the go-to banking platform for small businesses (SMBs). Novo is flipping the script of the banking world, and we’re excited to lead the small business banking revolution.


At Novo, we’re here to help entrepreneurs, freelancers, startups and SMBs achieve their financial goals by empowering them with an operating system that makes business banking as easy as iOS. We developed modern bank accounts and tools to help to save time and increase cash flow. Our unique product integrations enable easy access to tracking payments, transferring money internationally, managing business transactions and more. We’ve made a big impact in a short amount of time, helping thousands of organizations access powerfully simple business banking.  



We are looking for a Senior Data Scientist who is enthusiastic about using data and technology to solve complex business problems. If you're passionate about leading and helping to architect and develop thoughtful data solutions, then we want to chat. Are you ready to revolutionize the small business banking industry with us?


About the Role:


  • Build and manage predictive models focussed on credit risk, fraud, conversions, churn, consumer behaviour etc
  • Provides best practices, direction for data analytics and business decision making across multiple projects and functional areas
  • Implements performance optimizations and best practices for scalable data models, pipelines and modelling
  • Resolve blockers and help the team stay productive
  • Take part in building the team and iterating on hiring processes

Requirements for the Role:


  • 4+ years of experience in data science roles focussed on managing data processes, modelling and dashboarding
  • Strong experience in python, SQL and in-depth understanding of modelling techniques
  • Experience working with Pandas, scikit learn, visualization libraries like plotly, bokeh etc.
  • Prior experience with credit risk modelling will be preferred
  • Deep Knowledge of Python to write scripts to manipulate data and generate automated  reports

How We Define Success:


  • Expand access to data driven decision making across the organization
  • Solve problems in risk, marketing, growth, customer behaviour through analytics models that increase efficacy

Nice To Have, but Not Required:

  • Experience in dashboarding libraries like Python Dash and exposure to CI/CD 
  • Exposure to big data tools like Spark, and some core tech knowledge around API’s, data streaming etc.


Novo values diversity as a core tenant of the work we do and the businesses we serve. We are an equal opportunity employer, indiscriminate of race, religion, ethnicity, national origin, citizenship, gender, gender identity, sexual orientation, age, veteran status, disability, genetic information or any other protected characteristic. 

Read more
Nexsys

at Nexsys

2 recruiters
Kiran Basavaraj  Nirakari
Posted by Kiran Basavaraj Nirakari
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹15L / yr
NumPy
pandas
skill iconMongoDB
SQL
NOSQL Databases
+2 more

What we look for: 

We are looking for an associate who will crunch data from various sources and pull out the key insights. The associate will also help us improve and build new pipelines on request, visualize the data where required, and find flaws in our existing algorithms. 

Responsibilities: 

  • Work with multiple stakeholders to gather the requirements of data or analysis and take action on them. 
  • Write new data pipelines and maintain the existing pipelines. 
  • Gather data from various DBs and derive the required metrics from them. 

Required Skills: 

  • Experience with Python and libraries like Pandas and NumPy. 
  • Experience in SQL and understanding of NoSQL DBs. 
  • Hands-on experience in Data engineering. 
  • Must have good analytical skills and knowledge of statistics. 
  • Understanding of Data Science concepts. 
  • Bachelor degree in Computer Science or related field. 
  • Problem-solving skills and ability to work under pressure. 
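The metrics-from-data responsibility above often boils down to a groupby-aggregate step; a minimal pandas sketch (the event rows and column names are invented for illustration):

```python
import pandas as pd

# Pretend these rows were pulled from an upstream database.
events = pd.DataFrame({
    "user": ["u1", "u1", "u2", "u2", "u2"],
    "duration_s": [30, 90, 10, 20, 30],
})

# Derive per-user summary metrics with named aggregations.
metrics = events.groupby("user")["duration_s"].agg(
    sessions="count", total_s="sum", mean_s="mean"
).reset_index()

print(metrics.to_dict("records"))
```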

Nice to have: 

  • Experience in MongoDB or any NoSql DB. 
  • Experience in ElasticSearch. 
  • Knowledge of Tableau, Power BI or any other visualization tool.
Read more
RedSeer Consulting

at RedSeer Consulting

2 recruiters
Raunak Swarnkar
Posted by Raunak Swarnkar
Bengaluru (Bangalore)
0 - 2 yrs
₹10L - ₹15L / yr
skill iconPython
PySpark
SQL
pandas
Cloud Computing
+2 more

BRIEF DESCRIPTION:

At least 1 year of Python, Spark, SQL and data engineering experience

Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, RedShift/Snowflake

Relevant Experience: Legacy ETL job Migration to AWS Glue / Python & Spark combination

 

ROLE SCOPE:

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow the best practices, enable appropriate audit & control mechanism

Analytically skillful, identify the root causes quickly and efficiently debug issues

Take ownership of the deliverables and support the deployments

 

REQUIREMENTS:

Create data pipelines for data integration into Cloud stacks eg. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet and .avro files
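Of the three formats, JSON is covered by the standard library (.parquet and .avro would typically go through pyarrow and fastavro respectively, not shown here). A minimal batch-filter step of the kind an ETL job applies before loading, with invented records:

```python
import json

# Raw newline-delimited JSON records, as they might arrive
# from an upstream system.
raw_lines = [
    '{"id": 1, "status": "ok"}',
    '{"id": 2, "status": "failed"}',
    '{"id": 3, "status": "ok"}',
]

# Parse each record and keep only the failures for follow-up.
records = [json.loads(line) for line in raw_lines]
failures = [r["id"] for r in records if r["status"] == "failed"]
print(failures)  # [2]
```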

 

PREFERRED BACKGROUND:

Tier1/2 candidates from IIT/NIT/IIITs

However, relevant experience, learning attitude takes precedence

Read more
Contify
Agency job
via Hexagon Executive Search by Gaurang Mahajan
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
6 - 12 yrs
₹30L - ₹50L / yr
skill iconMachine Learning (ML)
skill iconData Science
Natural Language Processing (NLP)
Text mining
pandas
+1 more

Job Description
Lead Machine Learning (ML)/NLP Engineer
5+ years of experience

About Contify
Contify is an AI-enabled Market and Competitive Intelligence (MCI)
software to help professionals make informed decisions. Its B2B SaaS
platform helps leading organizations such as Ericsson, EY, Wipro,
Deloitte, L&T, BCG, MetLife, etc. track information on their competitors,
customers, industries, and topics of interest by continuously monitoring
over 500,000+ sources on a real-time basis. Contify is rapidly growing
with 185+ people across two offices in India. Contify is the winner of
Frost and Sullivan’s Product Innovation Award for Market and
Competitive Intelligence Platforms.

The role
We are looking for a hardworking, aspirational, and innovative
engineering person for the Lead ML/ NLP Engineer position. You’ll build
Contify’s ML and NLP capabilities and help us extract value from
unstructured data. Using advanced NLP, ML, and text analytics, you will
develop applications that will extract business insights by analyzing a
large amount of unstructured text information, identifying patterns, and
by connecting the events.
Responsibilities:
You will be responsible for all the processes from data collection, and
pre-processing, to training models and deploying them to production.
➔ Understand the business objectives; design and deploy scalable
ML models/ NLP applications to meet those objectives
➔ Use of NLP techniques for text representation, semantic analysis,
information extraction, to meet the business objectives in an
efficient manner along with metrics to measure progress
➔ Extend existing ML libraries and frameworks and use effective text
representations to transform natural language into useful features
➔ Defining and supervising the data collection process, verifying data
quality, and employing data augmentation techniques
➔ Defining the preprocessing or feature engineering to be done on a
given dataset
➔ Analyze the errors of the model and design strategies to overcome
them
➔ Research and implement the right algorithms and tools for ML/
NLP tasks
➔ Collaborate with engineering and product development teams
➔ Represent Contify in external ML industry events and publish
thought leadership articles


Desired Skills and Experience
To succeed in this role, you should possess outstanding skills in
statistical analysis, machine learning methods, and text representation
techniques.
➔ Deep understanding of text representation techniques (such as n-
grams, bag of words, sentiment analysis, etc), statistics and
classification algorithms
➔ Hands-on experience in feature extraction techniques for text classification and topic mining
➔ Knowledge of text analytics with a strong understanding of NLP
algorithms and models (GLMs, SVM, PCA, NB, Clustering, DTs)
and their underlying computational and probabilistic statistics
◆ Word embeddings like TF-IDF, Word2Vec, GloVe, FastText, etc.
◆ Language models like BERT, GPT, RoBERTa, XLNet
◆ Neural networks like RNN, GRU, LSTM, Bi-LSTM
◆ Classification algorithms like LinearSVC, SVM, LR
◆ XGB, MultinomialNB, etc.
◆ Other Algos- PCA, Clustering methods, etc
➔ Excellent knowledge and demonstrable experience in using NLP packages such as NLTK, Word2Vec, SpaCy, Gensim, Stanford CoreNLP, TensorFlow/PyTorch.
➔ Experience in setting up supervised & unsupervised learning
models including data cleaning, data analytics, feature creation,
model selection & ensemble methods, performance metrics &
visualization
➔ Evaluation metrics: Root Mean Squared Error, Confusion Matrix, F-Score, AUC-ROC, etc.
➔ Understanding of knowledge graph will be a plus
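The evaluation metrics listed above follow directly from the confusion-matrix counts; a from-scratch sketch for a binary classifier (the labels below are invented for illustration):

```python
# Toy ground-truth labels and predictions.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

# Confusion-matrix counts for the positive class.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

# Precision, recall, and their harmonic mean (F1).
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.75 0.75 0.75
```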


Qualifications
➔ Education: Bachelors or Masters in Computer Science,
Mathematics, Computational Linguistics or similar field
➔ At least 4 years' experience building Machine Learning & NLP
solutions over open-source platforms such as SciKit-Learn,
Tensorflow, SparkML, etc
➔ At least 2 years' experience in designing and developing
enterprise-scale NLP solutions in one or more of: Named Entity
Recognition, Document Classification, Feature Extraction, Triplet
Extraction, Clustering, Summarization, Topic Modelling, Dialog
Systems, Sentiment Analysis
➔ Self-starter who can see the big picture, and prioritize your work to
make the largest impact on the business’ and customer’s vision
and requirements
➔ Being a committer or a contributor to an open-source project is a
plus

Note
Contify is a people-oriented company. Emotional intelligence, therefore,
is a must. You should enjoy working in a team environment, supporting
your teammates in pursuit of our common goals, and working with your
colleagues to drive customer value. You strive to not only improve
yourself, but also those around you.

Read more
Top Management Consulting Company

Top Management Consulting Company

Agency job
via People First Consultants by Naveed Mohd
Gurugram
5 - 10 yrs
Best in industry
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconRedux/Flux
skill iconDocker
skill iconKubernetes
+4 more
Greetings!!

We are looking out for a technically driven  "Full-Stack Engineer" for one of our premium client

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address

Required Skills
• Hands-on experience with NodeJS, React, Redux, & Docker
• Great to have understanding about Kubernetes, Postgres and AWS (SQS, Lambda, S3)
• Experience implementing micro service technology
• Experience working with Python and Pandas for data manipulation is a plus
• Experience with Power BI and its APIs is a plus
• Experience with building and maintaining large data sets
• Ability to work across structured, semi-structured and unstructured data, extracting information
and identifying linkages across disparate data sets
• Understanding of information security principles
• Ability to understand complex systems and solve challenging problems
• Ability to clearly communicate complex solutions
• Ability to learn new technologies quickly
• Comfortable in a fast paced small team environment
• Open to work with global team structure, flexible and efficient
• Ability and flexibility to manage multiple assignments in a dynamic, complex and fast-paced
environment
• High level of attention to detail
• Commercial client-facing project experience is a plus
• Business-level language skills and fluency in English
Read more
An US based firm offering permanent WFH

An US based firm offering permanent WFH

Agency job
via Jobdost by Mamatha A
Remote only
3 - 6 yrs
₹12L - ₹23L / yr
skill iconDeep Learning
Computer Vision
PyTorch
TensorFlow
skill iconPython
+7 more
This person MUST have:
- B.E Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with and theoretical understanding of algorithms for classification, regression and clustering.
- Hands-on experience in computer vision and deep learning projects solving real-world vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional computer vision algorithms.
- Experience in one of the deep learning frameworks/networks: PyTorch, TensorFlow, Darknet (YOLO v4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
- Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, Matplotlib and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
Read more
SteelEye

at SteelEye

1 video
3 recruiters
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
1 - 8 yrs
₹10L - ₹40L / yr
skill iconPython
ETL
skill iconJenkins
CI/CD
pandas
+6 more
Roles & Responsibilties
Expectations of the role
This role reports to the Technical Lead (Support). You will be expected to resolve bugs in the platform that are identified by customers and internal teams. This role will progress towards SDE-2 in 12-15 months, where the developer will work on solving complex problems around scale and building out new features.
 
What will you do?
  • Fix issues with plugins for our Python-based ETL pipelines
  • Help with automation of standard workflow
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Responsible for any refactoring of code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
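Fixing and writing "plugins for Python-based ETL pipelines" typically means working against a plugin-registry pattern; a minimal sketch of the idea (names and steps are illustrative, not SteelEye's actual plugin API):

```python
# Transforms register themselves by name and are applied in sequence.
PLUGINS = {}

def plugin(name):
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("strip")
def strip_fields(record):
    # Trim whitespace from every string field.
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

@plugin("upper_symbol")
def upper_symbol(record):
    # Normalise the instrument symbol to upper case.
    record["symbol"] = record["symbol"].upper()
    return record

def run_pipeline(record, steps):
    for step in steps:
        record = PLUGINS[step](record)
    return record

out = run_pipeline({"symbol": " aapl ", "qty": 10}, ["strip", "upper_symbol"])
print(out)  # {'symbol': 'AAPL', 'qty': 10}
```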
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifest reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You support the DRY (Don't Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or you are keen to, your personal projects as well as any kind of contributions to the open-source communities if any)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Read more
A Reputed Analytics Consulting Company in Data Science field

A Reputed Analytics Consulting Company in Data Science field

Agency job
via 2COMS by Rafikhunnisa Shaik
Chennai, Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹4L - ₹13L / yr
skill iconMachine Learning (ML)
skill iconData Science
skill iconPython
NumPy
pandas
+3 more

 

Job Title : Analyst / Sr. Analyst – Data Science Developer - Python

Exp : 2 to 5 yrs

Loc : B’lore / Hyd / Chennai

NP: Candidate should join us in 2 months (Max) / Immediate Joiners Pref.

 

About the role:

 

We are looking for an Analyst / Senior Analyst who works in the analytics domain with a strong python background.

 

Desired Skills, Competencies & Experience:

 

• 2-4 years of experience working in the analytics domain with a strong Python background.

• Visualization skills in Python with Plotly, Matplotlib, Seaborn, etc., and the ability to create customized plots using such tools.

• Ability to write effective, scalable and modular code. Should be able to understand, test and debug existing Python project modules quickly and contribute to them.

• Should be familiar with Git workflows.

 

Good to Have:

• Familiarity with cloud platforms like AWS, Azure ML, Databricks, GCP, etc.

• Understanding of shell scripting and Python package development.

• Experience with Python data science packages like pandas, NumPy, scikit-learn, etc.

• ML model building and evaluation experience using scikit-learn.
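The model-building-and-evaluation experience mentioned above reduces to a split/fit/score loop; a from-scratch sketch mirroring what sklearn's train_test_split and accuracy_score do (the data and the single-threshold "model" are toy examples):

```python
# Hand-made train/holdout split of (feature, label) pairs.
train = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (8, 1)]
test = [(0, 0), (9, 1)]

def accuracy(threshold, rows):
    # Fraction of rows where "x > threshold" matches the label.
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

# "Fit": choose the threshold that best separates the training data.
best = max(range(10), key=lambda t: accuracy(t, train))

# "Evaluate": score the fitted model on held-out data.
print(best, accuracy(best, test))  # 3 1.0
```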

 

Read more
SteelEye

at SteelEye

1 video
3 recruiters
akanksha rajput
Posted by akanksha rajput
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
skill iconPython
pandas

About us

SteelEye is the only regulatory compliance technology and data analytics firm that offers transaction reporting, record keeping, trade reconstruction, best execution and data insight in one comprehensive solution. The firm’s scalable secure data storage platform offers encryption at rest and in flight and best-in-class analytics to help financial firms meet regulatory obligations and gain competitive advantage.

The company has a highly experienced management team and a strong board, who have decades of technology and management experience and worked in senior positions at many leading international financial businesses. We are a young company that shares a commitment to learning, being smart, working hard and being honest in all we do and striving to do that better each day. We value all our colleagues equally and everyone should feel able to speak up, propose an idea, point out a mistake and feel safe, happy and be themselves at work.

Being part of a start-up is as exciting as it is challenging. You will be part of the SteelEye team not just because of your talent but also because of your entrepreneurial flair, which we thrive on at SteelEye. This means we want you to be curious, contribute, ask questions and share ideas. We encourage you to get involved in helping shape our business.

What you will do

  • Deliver plugins for our Python-based ETL pipelines.
  • Deliver Python services for provisioning and managing cloud infrastructure.
  • Design, develop, unit test, and support code in production.
  • Deal with the challenges associated with large volumes of data.
  • Manage expectations with internal stakeholders and context-switch between multiple deliverables as priorities change.
  • Thrive in an environment that uses AWS and Elasticsearch extensively.
  • Keep abreast of technology and contribute to the evolution of the product.
  • Champion best practices and provide mentorship.

What we're looking for

  • Python 3.
  • Python libraries used for data (such as pandas and NumPy).
  • AWS.
  • Elasticsearch.
  • Performance tuning.
  • Object Oriented Design and Modelling.
  • Delivering complex software, ideally in a FinTech setting.
  • CI/CD tools.
  • Knowledge of design patterns.
  • Sharp analytical and problem-solving skills.
  • Strong sense of ownership.
  • Demonstrable desire to learn and grow.
  • Excellent written and oral communication skills.
  • Mature collaboration and mentoring abilities.
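A plugin for a pandas-based ETL pipeline, of the kind mentioned above, might be sketched like this; the registry and function names are hypothetical, not SteelEye's actual interface:

```python
import pandas as pd

# Hypothetical plugin contract: each plugin is a transform(df) -> df function.
def drop_null_trades(df: pd.DataFrame) -> pd.DataFrame:
    """Remove rows missing a trade identifier."""
    return df.dropna(subset=["trade_id"]).reset_index(drop=True)

# Plugins register themselves by name so pipelines can be configured as data.
PLUGINS = {"drop_null_trades": drop_null_trades}

def run_pipeline(df: pd.DataFrame, steps: list) -> pd.DataFrame:
    """Apply the named plugin transforms in order."""
    for name in steps:
        df = PLUGINS[name](df)
    return df

raw = pd.DataFrame({"trade_id": [1, None, 3], "qty": [10, 20, 30]})
clean = run_pipeline(raw, ["drop_null_trades"])
```

Keeping each transform small and registered by name makes individual steps easy to unit test in isolation.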

What will you get?

  • This is an individual contributor role. If you love to code, solve complex problems, and build amazing products without worrying about anything else, this is the role for you.
  • You will have the chance to learn from the best in the business who have worked across the world and are technology geeks.
  • A company that appreciates ownership and initiative. If you are full of ideas, this role is for you.
Read more
Client of People First Consultants

Client of People First Consultants

Agency job
via People First Consultants by Aishwarya KA
Pune, Hyderabad
3 - 6 yrs
₹4L - ₹8L / yr
skill iconPython
NumPy
pandas
skill iconDjango
skill iconFlask
+2 more

Key skills: Python, NumPy, pandas, SQL, ETL

Roles and Responsibilities:

 

- The work will involve the development of workflows triggered by events from other systems

- Design, develop, test, and deliver software solutions in the FX Derivatives group

- Analyse requirements for the solutions delivered, to ensure that they provide the right solution

- Develop easy-to-use documentation for the frameworks and tools developed, for adoption by other teams

- Familiarity with event-driven programming in Python

- Must have unit testing and debugging skills

- Good problem-solving and analytical skills

- Python packages such as NumPy and scikit-learn

- Testing and debugging applications

- Developing back-end components
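Event-driven programming in Python, as called for above, can be illustrated with a minimal synchronous dispatcher; all names here are illustrative, not a specific framework's API:

```python
from collections import defaultdict

class EventBus:
    """Minimal synchronous publish/subscribe dispatcher."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event: str, handler) -> None:
        """Register a callable to run whenever `event` is published."""
        self._handlers[event].append(handler)

    def publish(self, event: str, payload) -> None:
        """Invoke every handler registered for `event` with the payload."""
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("trade_booked", received.append)
bus.publish("trade_booked", {"id": 1})
```

Real workflow systems add asynchronous delivery, retries and persistence on top of this same subscribe/publish shape.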

Read more
A Mumbai based startup/WFH

A Mumbai based startup/WFH

Agency job
via Jobdost by Riya Roy
Mumbai
2 - 4 yrs
₹5L - ₹12L / yr
skill iconPython
skill iconDjango
RESTful APIs
skill iconFlask
NumPy
+7 more

Key Skills Required :

  1. Proficiency in Python 3.x based web and backend development
  2. Solid understanding of Python concepts
  3. Strong experience in building web applications using Django
  4. Experience building REST APIs using DRF or Flask
  5. Experience with some form of Machine Learning (ML)
  6. Experience in using libraries such as Numpy and Pandas
  7. Hands on experience with RDBMS such as Postgres or MySQL including querying
  8. Comfort with Git repositories, branching and deployment using Git
  9. Working experience with Docker
  10. Basic working knowledge of ReactJs
  11. Experience in deploying Django applications to AWS, DigitalOcean or Heroku

 

Responsibilities :

  1. Understanding requirements and contributing to engineering solutions at a conceptual stage to provide the best possible solution to the task/challenge
  2. Building high-quality code using coding standards based on the SRS/documentation
  3. Building component-based, maintainable, scalable and reusable backend libraries/modules
  4. Building and documenting scalable APIs on the OpenAPI standard
  5. Unit testing development modules and APIs
  6. Conducting code reviews to ensure that the highest quality standards are maintained
  7. Securing backend applications and APIs using industry best practices
  8. Troubleshooting issues and fixing bugs raised by the QA team efficiently
  9. Optimizing code
  10. Building and deploying the applications
Read more
Blue Sky Analytics

at Blue Sky Analytics

3 recruiters
Balahun Khonglanoh
Posted by Balahun Khonglanoh
Remote only
1 - 5 yrs
Best in industry
NumPy
SciPy
skill iconData Science
skill iconPython
pandas
+8 more

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for a data scientist to join its growing team. This position will require you to think and act on the geospatial architecture and data needs (specifically geospatial data) of the company. This position is strategic and will also require you to collaborate closely with data engineers, data scientists, software developers and even colleagues from other business functions. Come save the planet with us!


Your Role

Manage: It goes without saying that you will be handling large amounts of image and location datasets. You will develop dataframes and automated pipelines of data from multiple sources. You are expected to know how to visualize them and use machine learning algorithms to be able to make predictions. You will be working across teams to get the job done.

Analyze: You will curate and analyze vast amounts of geospatial datasets like satellite imagery, elevation data, meteorological datasets, OpenStreetMap data, demographic data, socio-economic data and topography to extract useful insights about the events happening on our planet.

Develop: You will be required to develop processes and tools to monitor and analyze data and its accuracy. You will develop innovative algorithms which will be useful in tracking global environmental problems like depleting water levels, illegal tree logging, and even tracking of oil-spills.

Demonstrate: A familiarity with working in geospatial libraries such as GDAL/Rasterio for reading/writing of data, and use of QGIS in making visualizations. This will also extend to using advanced statistical techniques and applying concepts like regression, properties of distribution, and conduct other statistical tests.

Produce: With all the hard work being put into data creation and management, it has to be used! You will be able to produce maps showing (but not limited to) spatial distribution of various kinds of data, including emission statistics and pollution hotspots. In addition, you will produce reports that contain maps, visualizations and other resources developed over the course of managing these datasets.

Requirements

These are must have skill-sets that we are looking for:

  • Excellent coding skills in Python (including deep familiarity with NumPy, SciPy, pandas).
  • Significant experience with git, GitHub, SQL, AWS (S3 and EC2).
  • Worked on GIS and is familiar with geospatial libraries such as GDAL and rasterio to read/write the data, a GIS software such as QGIS for visualisation and query, and basic machine learning algorithms to make predictions.
  • Demonstrable experience implementing efficient neural network models and deploying them in a production environment.
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
  • Capable of writing clear and lucid reports and demystifying data for the rest of us.
  • Be curious and care about the planet!
  • Minimum 2 years of demonstrable industry experience working with large and noisy datasets.
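As a small illustration of the raster analysis described above, here is NDVI (a standard vegetation index) computed with NumPy on synthetic band arrays; in a real workflow the red and near-infrared bands would be read from satellite imagery with rasterio or GDAL:

```python
import numpy as np

# Synthetic reflectance values standing in for satellite raster bands.
red = np.array([[0.1, 0.2],
                [0.3, 0.4]])
nir = np.array([[0.5, 0.6],
                [0.7, 0.8]])

# NDVI = (NIR - Red) / (NIR + Red); values near 1 indicate dense vegetation.
ndvi = (nir - red) / (nir + red)
```

The same element-wise pattern scales to full scenes, since NumPy broadcasts the arithmetic across the whole array.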

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community that you can use, contribute to and collaborate on.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
  • Work Machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly Retreats: Yes there's work, but then there's all the non-work fun aspect, aka the retreat!
  • Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
Read more
Achala IT Solutions

at Achala IT Solutions

1 video
2 recruiters
Srikanth Aripineni
Posted by Srikanth Aripineni
Hyderabad
3 - 12 yrs
₹8L - ₹15L / yr
skill iconPython
skill iconDjango
pandas
AWS Lambda
skill iconHTML/CSS
+4 more

Technical Experience :

  • 2-6 years of Python working experience
  • Expertise in at least one popular Python framework (Django/Flask)
  • Knowledge of object-relational mapping (ORM)
  • Familiarity with front-end technologies (JavaScript and HTML5)


Key Responsibilities :

      • Write effective, scalable code
      • Develop back-end components to improve responsiveness and overall performance
      • Integrate user-facing elements into applications
      • Test and debug programs
      • Improve functionality
Read more
Semperfi Solution

at Semperfi Solution

1 recruiter
Ambika Jituri
Posted by Ambika Jituri
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹18L / yr
skill iconPython
skill iconDjango
SQL
TensorFlow
NumPy
+2 more

Job Description

JD - Python Developer 

Responsibilities

  • Design and implement software features based on requirements
  • Architect new features for products or tools
  • Articulate and document designs as needed
  • Prepare and present technical training
  • Provide estimates and status for development tasks
  • Work effectively in a highly collaborative and iterative development process
  • Work effectively with the Product, QA, and DevOps team.
  • Troubleshoot issues and correct defects when required
  • Build unit and integration tests that assure correct behavior and increase the maintainability of the code base
  • Apply dev-ops and automation as needed
  • Commit to continuous learning and enhancement of skills and product knowledge

 

Required Qualifications

  • Minimum of 5 years of relevant experience in development and design
  • Proficiency in Python and extensive knowledge of the associated libraries
  • Extensive experience with Python data science libraries: TensorFlow, NumPy, SciPy, pandas, etc.
  • Strong skills in producing visuals with algorithm results
  • Strong SQL and working knowledge of Microsoft SQL Server and other data storage technologies
  • Strong web development skills
  • Advanced knowledge of ORM and data access patterns
  • Experience working with Scrum and Agile methodologies
  • Excellent debugging and troubleshooting skills
  • Deep knowledge of DevOps practices and cloud services
  • Strong collaboration and verbal and written communication skills
  • Self-starter, detail-oriented, organized, and thorough
  • Strong interpersonal skills and a team-oriented mindset
  • Fast learner and creative capacity for developing innovative solutions to complex problems

Skills

PYTHON, SQL, TensorFlow, NumPy, SciPy, Pandas

Read more
Yottaasys AI LLC

at Yottaasys AI LLC

5 recruiters
Vinayak sg
Posted by Vinayak sg
Bengaluru (Bangalore)
1 - 5 yrs
₹6L - ₹15L / yr
skill iconData Science
Data Scientist
skill iconPython
Computer Vision
Keras
+7 more
Description

  • Develop state-of-the-art algorithms in the fields of Computer Vision, Machine Learning and Deep Learning.
  • Provide software specifications and production code on time to meet project milestones.

Qualifications

  • BE or Master's with 3+ years of experience
  • Must have prior knowledge and experience in image processing and video processing
  • Should have knowledge of object detection and recognition
  • Must have experience in feature extraction, segmentation and classification of images
  • Face detection, alignment, recognition, tracking and attribute recognition
  • Excellent understanding and project/job experience in machine learning, particularly in areas of deep learning: CNN, RNN, TensorFlow, Keras, etc.
  • Real-world expertise in deep learning applied to Computer Vision problems
  • Strong foundation in mathematics
  • Strong development skills in Python
  • Must have worked with vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, Keras
  • Quick learner of new technologies
  • Ability to work independently as well as part of a team
  • Knowledge of working closely with version control (Git)
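A toy illustration of one of the basic techniques listed above, segmentation by thresholding, using a hand-made grayscale array; real projects would apply OpenCV operations to actual images:

```python
import numpy as np

# Tiny synthetic grayscale "image" (0-255 intensities).
image = np.array([[10, 200, 30],
                  [220, 40, 210],
                  [15, 25, 230]], dtype=np.uint8)

# Threshold segmentation: pixels brighter than 128 become foreground.
mask = image > 128
foreground_pixels = int(mask.sum())
```

Thresholding is the simplest segmentation baseline; classification and deep-learning approaches build on the same idea of assigning each pixel to a region.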
Read more
App-based lending platform. ( AF1)

App-based lending platform. ( AF1)

Agency job
via Multi Recruit by Ayub Pasha
Bengaluru (Bangalore)
1 - 2 yrs
₹15L - ₹17L / yr
skill iconMachine Learning (ML)
skill iconData Science
Data Scientist
skill iconPython
pandas
+4 more
  • Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
  • Implement data pipelines, new features, and algorithms that are critical to our production models
  • Create scalable strategies to deploy and execute your models
  • Write well designed, testable, efficient code
  • Identify valuable data sources and automate collection processes.
  • Undertake preprocessing of structured and unstructured data.
  • Analyze large amounts of information to discover trends and patterns.

Requirements:

  • 1+ years of experience in applied data science or engineering with a focus on machine learning
  • Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, scikit-learn, XGBoost, LightGBM; models such as logistic regression, random forest and gradient boosting)
  • Strong quantitative and programming skills with a product-driven sensibility
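A minimal sketch of the kind of risk-scoring model implied above, using a scikit-learn random forest on synthetic data; the feature semantics and labels are invented for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for borrower features and default labels.
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# Fit the model and score each row with its probability of the positive class.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
risk_scores = clf.predict_proba(X)[:, 1]
```

In production, scores like these feed decision thresholds that are tuned against business metrics rather than used raw.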

 

Read more
A dynamic & experienced technology company

A dynamic & experienced technology company

Agency job
via Jobdost by Ankitha Vyas
Mumbai
1 - 2 yrs
₹4.8L - ₹6.6L / yr
skill iconPython
skill iconDjango
RESTful APIs
skill iconMachine Learning (ML)
NumPy
+3 more
Job Type: Full-time

Positions : 2-3

CTC Offering : 40,000 to 55,000/month

Job Location: Remote for 6-12 months due to the pandemic, then Mumbai, Maharashtra

Required experience:
Minimum 1.5 to 2 years of experience in Web & Backend Development using Python and Django, with experience in some form of Machine Learning (ML) algorithms

Overview
We are looking for Python developers with a strong understanding of object orientation and experience in web and backend development. Experience with analytical algorithms and mathematical calculations using libraries such as NumPy and pandas is a must, as is experience with some form of machine learning. We require candidates who have working experience using the Django framework and DRF.

Key Skills required (Items in Bold are mandatory keywords) :
1. Proficiency in Python 3.x based web and backend development
2. Solid understanding of Python concepts
3. Strong experience in building web applications using Django
4. Experience building REST APIs using DRF or Flask
5. Experience with some form of Machine Learning (ML)
6. Experience in using libraries such as Numpy and Pandas
7. Some form of experience with NLP and Deep Learning using any of Pytorch, Tensorflow, Keras, Scikit-learn or similar
8. Hands on experience with RDBMS such as Postgres or MySQL
9. Comfort with Git repositories, branching and deployment using Git
10. Working experience with Docker
11. Basic working knowledge of ReactJs
12. Experience in deploying Django applications to AWS, DigitalOcean or Heroku

KRAs includes :
1. Understanding the scope of work
2. Understanding and adopting the current internal development workflow and processes
3. Understanding client requirements as communicated by the project manager
4. Arriving on timelines for projects, either independently or as a part of a team
5. Executing projects either independently or as a part of a team
6. Developing products and projects using Python
7. Writing code to collect and mathematically analyse large volumes of data
8. Creating backend modules in Python by building new modules or reusing existing ones, so as to provide optimal deliveries on time
9. Writing scalable, maintainable code
10. Building secured REST APIs
11. Setting up batch task processing environments using Celery
12. Unit testing prepared modules
13. Bug fixing issues as reported by the QA team
14. Optimization and performance tuning of code
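The "collect and mathematically analyse large volumes of data" responsibility above can be illustrated with a small pandas aggregation; the column names and values are invented:

```python
import pandas as pd

# Synthetic event records standing in for collected production data.
events = pd.DataFrame({
    "user": ["a", "a", "b", "b", "b"],
    "duration_ms": [120, 80, 200, 150, 250],
})

# Summarize per user: how many events, and the average duration.
summary = events.groupby("user")["duration_ms"].agg(["count", "mean"])
```

The same groupby/aggregate pattern scales from toy frames like this to millions of rows, which is where performance tuning (KRA 14) comes in.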

Bonus but not mandatory
1. Nodejs
2. Redis
3. PHP
4. CI/CD
5. AWS
Read more
Hammoq

at Hammoq

1 recruiter
Nikitha Muthuswamy
Posted by Nikitha Muthuswamy
Remote, Indore, Ujjain, Hyderabad, Bengaluru (Bangalore)
5 - 8 yrs
₹5L - ₹15L / yr
pandas
NumPy
Data engineering
Data Engineer
Apache Spark
+6 more
  • Performs analytics to extract insights from the organization's raw historical data.
  • Generates usable training datasets for any/all MV projects with the help of annotators, if needed.
  • Analyses user trends and identifies their biggest bottlenecks in the Hammoq workflow.
  • Tests the short/long-term impact of productized MV models on those trends.
  • Skills: NumPy, pandas, Apache Spark, PySpark, ETL (mandatory).
Read more
Get to hear about interesting companies hiring right now
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.
Find more jobs