Data Scientist freshers
Posted by Jasmine Shaik
0 - 1 yrs
₹1L - ₹1L / yr
Hyderabad
Skills
Data Science
R Programming
Python
TensorFlow
We are looking for freshers with skills in Big Data, Data Science, and Computer Vision.

About OpexAI

Founded: 2017
Size: 20-100
Stage: Profitable

We at OpexAI strive to shape business strategies and provide effective solutions to your complex business problems using AI, machine learning, and cognitive computing approaches. With 40+ years of experience in analytics and a workforce of highly skilled and certified consultants, we provide a realistic approach to help you chart an optimal path to success.

Connect with the team
Jasmine Shaik

Similar jobs

Bengaluru (Bangalore)
6 - 12 yrs
₹25L - ₹35L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+4 more


• 6+ years of data science experience.

• Demonstrated experience in leading programs.

• Prior experience in customer data platforms/finance domain is a plus.

• Demonstrated ability in developing and deploying data-driven products.

• Experience of working with large datasets and developing scalable algorithms.

• Hands-on experience of working with tech, product, and operation teams.


Technical Skills:

• Deep understanding and hands-on experience with machine learning and deep learning algorithms. Good understanding of NLP and LLM concepts, and fair experience in developing NLU and NLG solutions.

• Experience with Keras/TensorFlow/PyTorch deep learning frameworks.

• Proficient in scripting languages (Python/Shell) and SQL.

• Good knowledge of Statistics.

• Experience with big data, cloud, and MLOps.

Soft Skills:

• Strong analytical and problem-solving skills.

• Excellent presentation and communication skills.

• Ability to work independently and deal with ambiguity.

Continuous Learning:

• Stay up to date with emerging technologies.


Qualification:


A degree (B.Tech or equivalent) in Computer Science, Statistics, Applied Mathematics, Machine Learning, or any related field.



Leading Sales Platform
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Big Data
ETL
Spark
Data engineering
Data governance
+4 more
Responsibilities:
  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework
  • Build and monitor key metrics of data health across the entire Big Data pipelines
  • Early alerting and escalation process to quickly identify and remedy quality issues before something ever goes 'live' in front of the customer
  • Build/refine tools and processes for quick root-cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mindset across the company
Required skills and experience:
  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic, and escalations with Slack/email/dashboard integrations
  • Executive-level communication, prioritization, and team leadership skills
Slintel
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
4 - 9 yrs
₹20L - ₹28L / yr
Big Data
ETL
Apache Spark
Spark
Data engineer
+5 more
Responsibilities
  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Ascendeum
Posted by Swezelle Esteves
Remote only
1 - 5 yrs
₹8L - ₹10L / yr
Python
Data Analytics
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
+4 more

Job Responsibilities: 

 

  • Identify valuable data sources and automate collection processes
  • Undertake preprocessing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Help develop reports and analysis
  • Present information using data visualization techniques
  • Assess tests, implement new or upgraded software, and assist with strategic decisions on new systems
  • Evaluate changes and updates to source production systems
  • Develop, implement, and maintain leading-edge analytic systems, taking complicated problems and building simple frameworks
  • Provide technical expertise in data storage structures, data mining, and data cleansing
  • Propose solutions and strategies to business challenges

 

Desired Skills and Experience: 

 

  • At least 1 year of experience in Data Analysis 
  • Complete understanding of Operations Research, Data Modelling, ML, and AI concepts. 
  • Knowledge of Python is mandatory; familiarity with MySQL, SQL, Scala, Java, or C++ is an asset
  • Experience using visualization tools (e.g. Jupyter Notebook) and data frameworks (e.g. Hadoop) 
  • Analytical mind and business acumen 
  • Strong math skills (e.g. statistics, algebra) 
  • Problem-solving aptitude 
  • Excellent communication and presentation skills. 
  • Bachelor’s / Master's Degree in Computer Science, Engineering, Data Science or other quantitative or relevant field is preferred  
Dori AI
Posted by Nitin Gupta
Bengaluru (Bangalore)
2 - 8 yrs
₹8L - ₹20L / yr
Python
Data Science
Machine Learning (ML)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+5 more

Dori AI enables enterprises with AI-powered video analytics to significantly increase human productivity and improve process compliance. We leverage a proprietary full-stack end-to-end computer vision and deep learning platform to rapidly build and deploy AI solutions for enterprises. The platform was built with enterprise considerations including time-to-value, time-to-market, security, and scalability across a range of use cases. Capture visual data across multiple sites, leverage AI + Computer Vision to gather key insights, and make decisions with actionable visual insights. Launch CV applications in a matter of weeks that are optimized for both cloud and edge deployments.

 


Job brief: Sr. Software Engineer/Software Engineer


All of our team members are expected to learn, learn, and learn! We are working on cutting-edge technologies and areas of artificial intelligence that have never been explored before. We are looking for motivated software engineers with strong coding skills who want to work on problems and challenges they have never worked on before. All of our team members wear multiple hats, so you will be expected to work simultaneously on multiple aspects of the products we ship.


Responsibilities

  • Participate heavily in the brainstorming of system architecture and feature design
  • Interface with external customers and key stakeholders to understand and document design requirements
  • Work cross-functionally with Engineering, Data Science, Product, UX, and Infrastructure teams
  • Drive best coding practices across the company (e.g. documentation, code reviews, coding standards, etc.)
  • Perform security, legal, and license reviews of committed code
  • Complete projects with little or no supervision from senior leadership


Required Qualifications

  • Built and deployed customer-facing services and products at scale
  • Developed unit and integration tests
  • Worked on products where experimentation and data science are core to the development
  • Experience with large-scale distributed systems that have thousands of microservices and manage millions of transactions per day
  • Solid instruction-level understanding of Object Oriented design, data structures, and software engineering principles
  • Must have 4+ years of experience in back-end web development with the following tools: Python, Flask, FastAPI, AWS or Azure, GCP, Java or C/C++, ORM, Mongo, Postgres, TimescaleDB, CI/CD


Desired Experience/Skills

  • You have a strong background in software development 
  • Experience with the following tools: Google Cloud Platform, Objective C/Swift, Github, Docker
  • Experience with open-source projects in a startup environment
  • BS, MS, or Ph.D. in Computer Science, Software Engineering, Math, Electrical Engineering, or other STEM degree


liquiloans
Posted by Vipin Kumar
Mumbai
1 - 7 yrs
₹6L - ₹14L / yr
Data Science
Machine Learning (ML)
Python
Data Analytics
Work on the cutting-edge FinTech landscape on problems of prediction and analytics. This position will work on internal and external data to draw insights into the bottom line, improving customer experience, credit decisioning, and predictive maintenance of the platform.
VIMANA
Posted by Loshy Chandran
Remote, Chennai
2 - 5 yrs
₹10L - ₹20L / yr
Data engineering
Data Engineer
Apache Kafka
Big Data
Java
+4 more

We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.

 

Responsibilities:

  • You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
  • You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
  • You will be working with cutting-edge technologies and tools for stream processing in Java, NodeJS, and Python, using frameworks like Spring and RxJS.
  • You will be leveraging big data technologies like Kafka, Elasticsearch and Spark, processing more than 10 Billion events per day to build a maintainable system at scale.
  • You will be building Domain Driven APIs as part of a micro-service architecture.
  • You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
  • You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.

 

Requirements:

  • Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
  • 2 to 5 years of product development experience.
  • Experience building applications using Java, NodeJS, or Python.
  • Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
  • Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies.
  • Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
  • Experience using NoSQL databases like MongoDB or Elasticsearch.
  • Prior experience with container orchestrators like Kubernetes is a plus.
About VIMANA

We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.

Please visit https://govimana.com/ to learn more about what we do.

Why Explore a Career at VIMANA
  • We recognize that our dedicated team members make us successful and we offer competitive salaries.
  • We are a workplace that values work-life balance, provides flexible working hours, and offers full-time remote work options.
  • You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
  • Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!

VIMANA Interview Process
We usually aim to complete all the interviews within a week and provide prompt feedback to the candidate. As of now, all interviews are conducted online due to the COVID situation.

1. Telephonic screening (30 min)

A 30-minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company.
Clarify any queries regarding the job/company.
Give an overview of further interview rounds.

2. Technical Rounds

This would be a deep technical round to evaluate the candidate's technical capability pertaining to the job role.

3. HR Round

The candidate's team and cultural fit will be evaluated during this round.

We would proceed with releasing the offer if the candidate clears all the above rounds.

Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
Hyderabad
2 - 4 yrs
₹10L - ₹15L / yr
Python
PySpark
Knowledge in AWS
  • Desire to explore new technology and break new ground.
  • Are passionate about Open Source technology, continuous learning, and innovation.
  • Have the problem-solving skills, grit, and commitment to complete challenging work assignments and meet deadlines.

Qualifications

  • Engineer enterprise-class, large-scale deployments, and deliver Cloud-based Serverless solutions to our customers.
  • You will work in a fast-paced environment with leading microservice and cloud technologies, and continue to develop your all-around technical skills.
  • Participate in code reviews and provide meaningful feedback to other team members.
  • Create technical documentation.
  • Develop thorough Unit Tests to ensure code quality.

Skills and Experience

  • Advanced skills in troubleshooting and tuning AWS Lambda functions developed with Java and/or Python.
  • Experience with event-driven architecture design patterns and practices
  • Experience in database design and architecture principles and strong SQL abilities
  • Experience with message brokers like Kafka and Kinesis
  • Experience with Hadoop, Hive, and Spark (either PySpark or Scala)
  • Demonstrated experience owning enterprise-class applications and delivering highly available distributed, fault-tolerant, globally accessible services at scale.
  • Good understanding of distributed systems.
  • Candidates will be self-motivated and display initiative, ownership, and flexibility.

 

Preferred Qualifications

  • AWS Lambda function development experience with Java and/or Python.
  • Lambda triggers such as SNS, SES, or cron.
  • Databricks
  • Cloud development experience with AWS services, including:
  • IAM
  • S3
  • EC2
  • AWS CLI
  • API Gateway
  • ECR
  • CloudWatch
  • Glue
  • Kinesis
  • DynamoDB
  • Java 8 or higher
  • ETL data pipeline building
  • Data Lake Experience
  • Python
  • Docker
  • MongoDB or similar NoSQL DB.
  • Relational Databases (e.g., MySQL, PostgreSQL, Oracle, etc.).
  • Gradle and/or Maven.
  • JUnit
  • Git
  • Scrum
  • Experience with Unix and/or macOS.
  • Immediate Joiners

Nice to have:

  • AWS / GCP / Azure Certification.
  • Cloud development experience with Google Cloud or Azure

 

Bewakoof Brands Pvt Ltd
Posted by Sahil Khan
Mumbai
1 - 2 yrs
₹8L - ₹12L / yr
Data Science
R Programming
Tableau
  • 1 to 3 years of experience in product analytics
  • Highly conversant with Google Analytics and other similar tools
  • Basic programming ability (preferably R or Python)
WyngCommerce
Posted by Ankit Jain
Bengaluru (Bangalore)
4 - 7 yrs
₹18L - ₹25L / yr
Data Science
Demand forecasting
Optimization
WyngCommerce is building state-of-the-art AI software for global consumer brands and retailers to enable best-in-class customer experiences. Our vision is to democratise machine learning algorithms for our customers and help them realise dramatic improvements in speed, cost, and flexibility. Backed by a clutch of prominent angel investors and having some of the category leaders in the retail industry as clients, we are looking to hire for our data science team. The data science team at WyngCommerce is on a mission to challenge the norms and re-imagine how retail business should be run across the world. As a Senior Data Scientist in the team, you will be driving and owning the thought leadership and impact on one of our core data science problems. You will work collaboratively with the founders, clients, and engineering team to formulate complex problems, run exploratory data analysis and test hypotheses, implement ML-based solutions, and fine-tune them with more data. This is a high-impact role with goals that directly impact our business.

Your Role & Responsibilities:
  • Lead and own the thought process on one or more of our core data science problems, e.g. product clustering, intertemporal optimization, etc.
  • Actively participate and challenge assumptions in translating ambiguous business problems into one or more ML/optimization problems
  • Implement data-driven solutions based on advanced ML and optimization algorithms to address business problems
  • Research, experiment, and innovate ML/statistical approaches in various application areas of interest and contribute to IP
  • Partner with engineering teams to build scalable, efficient, automated ML-based pipelines (training/evaluation/monitoring)
  • Deploy, maintain, and debug ML/decision models in production environments
  • Analyze and assess data to ensure high data quality and correctness of downstream processes
  • Define and own metrics on solution quality, data quality, and stability of ML pipelines
  • Communicate results to stakeholders and present data/insights to participate in and drive decision making

Desired Skills & Experience:
  • Bachelors or Masters in a quantitative field from a top-tier college
  • Minimum of 3+ years of experience in a data science role in a technology company
  • Solid mathematical background (especially in linear algebra, probability theory, optimization theory, decision theory, operations research)
  • Familiarity with theoretical aspects of common ML techniques (generalized linear models, ensembles, SVMs, clustering algorithms, graphical models, etc.), statistical tests/metrics, experiment design, and evaluation methodologies
  • Solid foundation in data structures, algorithms, and programming language theory
  • Demonstrable track record of dealing with ambiguity, prioritizing needs, bias for iterative learning, and delivering results in a dynamic environment with minimal guidance
  • Hands-on experience in at least one of the focus areas of the WyngCommerce Data Science team: (a) Product Clustering, (b) Demand Forecasting, (c) Intertemporal Optimization, (d) Reinforcement Learning, (e) Transfer Learning
  • Good programming skills (fluent in Java/Python/SQL) with experience of using common ML toolkits (e.g., sklearn, TensorFlow, Keras, NLTK) to build models for real-world problems
  • Computational thinking and familiarity with practical application requirements (e.g., latency, memory, processing time)
  • Experience using cloud-based ML platforms (e.g., AWS SageMaker, Azure ML) and cloud-based data storage, and deploying ML models in production environments in collaboration with engineering teams
  • Excellent written and verbal communication skills for both technical and non-technical audiences
  • (Plus point) Experience of applying ML / other techniques in the domain of supply chain, and particularly in retail, for inventory optimization, demand forecasting, assortment planning, and other such problems
  • (Nice to have) Research experience and publications in top ML/Data Science conferences