Indix

Data Scientist
Posted by Sri Devi
3 - 7 yrs
₹15L - ₹45L / yr
Chennai, Hyderabad
Skills
Data Science
Python
Algorithms
Data Structures
Scikit-Learn
Deep Learning
Spark
Natural Language Toolkit (NLTK)
Software Engineer – ML at Indix provides an opportunity to design and build systems that crunch large amounts of data every day.

What We’re Looking For
• 3+ years of experience
• Ability to propose hypotheses and design experiments in the context of specific problems
• A strong engineering background
• Good overlap with the Indix data tech stack, such as Hadoop, MapReduce, HDFS, Spark, Scalding and Scala/Python/C++
• Dedication and diligence in understanding the application domain, collecting/cleaning data and conducting experiments
• Creativity in model and algorithm development
• An obsession with developing algorithms/models that directly impact business
• Master’s/PhD in Computer Science/Statistics is a plus

Job Expectations
• Experience in text mining and Python libraries like scikit-learn, NumPy, etc. (see the sketch below)
• Collect relevant data from production systems / use crawling and parsing infrastructure to put together data sets
• Survey academic literature and identify potential approaches for exploration
• Craft, conduct and analyze experiments to evaluate models/algorithms
• Communicate findings and take algorithms/models to production with end-to-end ownership
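By way of illustration only, here is a minimal text-classification sketch with scikit-learn of the kind of text-mining work described above; the sample titles and categories are made up, and this is not Indix's actual pipeline:

```python
# Minimal text-mining sketch with scikit-learn (hypothetical toy data;
# not Indix's actual production pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy product titles and category labels standing in for real crawled data.
titles = [
    "Apple iPhone 12 64GB smartphone",
    "Samsung Galaxy S21 5G phone",
    "Nike Air Zoom running shoes",
    "Adidas Ultraboost sneakers",
]
labels = ["electronics", "electronics", "footwear", "footwear"]

# TF-IDF features feeding a linear classifier: a common first baseline
# for product-title categorization.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(titles, labels)

print(model.predict(["Puma suede casual shoes"]))  # expected: ['footwear']
```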

About Indix

Founded: 2010
Stage: Raised funding
About
Indix is building an index of over 1 billion consumer products in the world. It claims coverage of products across all retail verticals except auto, books, and music, with 10,000+ product attributes to classify them. On top of its product database, Indix offers customized apps and APIs for retailers to tap into the platform and gain real-time insights based on their preferences. It has 20+ customers across brands, technology firms, and merchants.
Connect with the team
Sri Devi

Similar jobs

Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
PowerBI
Databricks
+4 more

About The Company


The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.


The Role


Power BI front-end developer in the Data Domain (Manufacturing, Sales & Marketing, Purchasing, Logistics, …). You will be responsible for the Power BI front-end design, development, and delivery of highly visible data-driven applications in the Compressor Technique. You always take a quality-first approach, ensuring the data is visualized in a clear, accurate, and user-friendly manner. You ensure standards and best practices are followed and that documentation is created and maintained. Where needed, you take initiative and make recommendations to drive improvements. In this role you will also be involved in the tracking, monitoring, and performance analysis of production issues and the implementation of bug fixes and enhancements.


Skills & Experience


• The ideal candidate has a degree in Computer Science, Information Technology, or equivalent through experience.
• Strong knowledge of BI development principles, time intelligence, functions, dimensional modeling, and data visualization is required.
• Advanced knowledge and 5-10 years of experience with professional BI development and data visualization are preferred.
• You are familiar with data warehouse concepts.
• Knowledge of MS Azure (Data Lake, Databricks, SQL) is considered a plus.
• Experience with scripting languages such as PowerShell and Python to set up and automate Power BI platform-related activities is an asset.
• Good knowledge (oral and written) of English is required.

Monsoonfish
Posted by Medha Sahai
Remote only
3 - 7 yrs
₹4L - ₹7L / yr
SQL
PostgreSQL
Python
Scala
Julia
+3 more

Role: 

We're seeking a proactive leader to head a team and tackle complex business logic problems. This role requires an individual who can contribute independently, providing valuable insights rather than needing constant guidance. We value the depth of knowledge and hands-on experience over spoon-fed solutions. 


Technical Requirements: 

• Proficiency in data processing technologies, including SQL and PostgreSQL, with the ability to write complex queries.
• Strong coding skills in Python, with hands-on experience in developing solutions.
• Knowledge of Scala or Julia is advantageous.
• Experience leading teams and working on projects from inception.
• Must be familiar with ML frameworks such as TensorFlow, OpenAI, LLP, and BUD.
• Some exposure to Airflow is preferred.
• Must have knowledge of and hands-on experience with DevOps coding and migration.
• Must have knowledge of Flask or FastAPI (a minimal FastAPI sketch follows below).
• Experience working with both Linux and Windows environments.


Experience: 5+ years.



(Note: The resource should have hands-on experience with all the must-have skills mentioned above.)
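As a rough illustration of the Flask/FastAPI knowledge asked for above, here is a minimal FastAPI sketch; the endpoint name and payload are hypothetical and not taken from the posting:

```python
# Minimal FastAPI sketch (illustrative only; the /score endpoint and its
# payload are hypothetical examples, not part of the job posting).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    values: list[float]

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Stand-in for the kind of business-logic scoring the role describes.
    return {"mean": sum(req.values) / len(req.values) if req.values else 0.0}

# Run with: uvicorn main:app --reload  (assuming this file is main.py)
```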



SteelEye
Agency job via wrackle, posted by Naveen Taalanki
Bengaluru (Bangalore)
1 - 8 yrs
₹10L - ₹40L / yr
Python
ETL
Jenkins
CI/CD
pandas
+6 more
Roles & Responsibilities
Expectations of the role
This role reports to the Technical Lead (Support). You will be expected to resolve bugs in the platform that are identified by customers and internal teams. This role will progress towards SDE-2 in 12-15 months, where the developer will be working on solving complex problems around scale and building out new features.
What will you do?
  • Fix issues with plugins for our Python-based ETL pipelines (a minimal ETL sketch follows after this list)
  • Help with automation of standard workflow
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Responsible for any refactoring of code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
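A minimal sketch of the kind of Python ETL step this role maintains, assuming a simple file-in/file-out pipeline; SteelEye's actual plugin interface is not described in the posting, so every name here is hypothetical:

```python
# Skeleton of a small Python ETL step (hypothetical names; illustrative
# of the pipeline work described above, not SteelEye's real interface).
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Read raw trade records from a CSV drop."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalise column names and drop incomplete rows."""
    df = df.rename(columns=str.lower)
    return df.dropna(subset=["trade_id", "timestamp"])

def load(df: pd.DataFrame, out_path: str) -> None:
    """Write the cleaned batch for downstream indexing (needs pyarrow)."""
    df.to_parquet(out_path, index=False)

if __name__ == "__main__":
    load(transform(extract("trades.csv")), "trades_clean.parquet")
```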
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to take, your personal projects, and any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
  1. Big data developer with 8+ years of professional IT experience, with expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level systems spanning big data.
  2. A skilled developer with strong problem-solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
  3. Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
  4. Hands-on experience in creating real-time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm.
  5. Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include NameNode, DataNode, Resource Manager, Node Manager and Job History Server.
  6. Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
  7. Well versed in the installation, configuration and management of big data and the underlying infrastructure of a Hadoop cluster.
  8. Hands-on experience in coding MapReduce/YARN programs using Java, Scala and Python for analyzing big data.
  9. Exposure to the Cloudera development environment and management using Cloudera Manager.
  10. Extensively worked on Spark using Scala on clusters for computational analytics; installed it on top of Hadoop and performed advanced analytical applications using Spark with Hive and SQL/Oracle.
  11. Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop, performing transformations using Hive and MapReduce, and then loading data into HDFS.
  12. Used the Spark DataFrames API over the Cloudera platform to perform analytics on Hive data.
  13. Hands-on experience with MLlib from Spark, used for predictive intelligence, customer segmentation and smooth maintenance in Spark Streaming.
  14. Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
  15. Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
  16. Worked on creating data pipelines for different events of ingestion and aggregation, and loading consumer response data into Hive external tables in HDFS locations to serve as feeds for Tableau dashboards.
  17. Hands-on experience in using Sqoop to import data into HDFS from RDBMS and vice versa.
  18. In-depth understanding of Oozie for scheduling all Hive/Sqoop/HBase jobs.
  19. Hands-on expertise in real-time analytics with Apache Spark.
  20. Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
  21. Extensive experience in working with different ETL tool environments like SSIS and Informatica, and reporting tool environments like SQL Server Reporting Services (SSRS).
  22. Experience in the Microsoft cloud and in setting up clusters in Amazon EC2 & S3, including automation of setting up and extending the clusters in the AWS Amazon cloud.
  23. Extensively worked on Spark using Python on clusters for computational analytics; installed it on top of Hadoop and performed advanced analytical applications using Spark with Hive and SQL.
  24. Strong experience and knowledge of real-time data analytics using Spark Streaming, Kafka and Flume.
  25. Knowledge of installing, configuring, supporting and managing Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions and on Amazon Web Services (AWS).
  26. Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
  27. Experience in creating DataFrames using PySpark and performing operations on them using Python (a minimal sketch follows after this list).
  28. In-depth understanding/knowledge of Hadoop architecture and various components such as HDFS, the MapReduce programming paradigm, High Availability and YARN architecture.
  29. Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
  30. Generated various kinds of knowledge reports using Power BI based on business specifications.
  31. Developed interactive Tableau dashboards to provide a clear understanding of industry-specific KPIs, using quick filters and parameters to handle them more efficiently.
  32. Well experienced in projects using JIRA, testing, and Maven and Jenkins build tools.
  33. Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2, S3), focusing on high availability, fault tolerance and auto-scaling.
  34. Good experience with use-case development and with software methodologies like Agile and Waterfall.
  35. Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
  36. Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza to HDFS, and performing transformations on it using Hive, Pig and Spark.
  37. Extensive experience in text analytics, developing different statistical machine learning solutions to various business problems and generating data visualizations using Python and R.
  38. Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with Hadoop clusters.
  39. Hands-on experience in Hadoop big data technology, working on MapReduce, Pig and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
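To illustrate point 27 above, a minimal PySpark DataFrame sketch might look like the following; the data is a toy stand-in and cluster configuration is omitted:

```python
# Minimal PySpark DataFrame sketch (toy data; real schemas and cluster
# configuration omitted).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-demo").getOrCreate()

df = spark.createDataFrame(
    [("alice", 34), ("bob", 29), ("carol", 41)],
    ["name", "age"],
)

# Typical operations: filter rows, derive a column, aggregate.
adults = (
    df.filter(F.col("age") > 30)
      .withColumn("decade", (F.col("age") / 10).cast("int") * 10)
)
adults.groupBy("decade").count().show()

spark.stop()
```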
liquiloans
Posted by Dhirendra Singh
Mumbai
4 - 8 yrs
₹20L - ₹25L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more
Role: AVP Analytics

The AVP Analytics will have the following responsibilities:
• Analyze data across all verticals in the organization to come up with actionable insights and recommendations
• Conceptualize and define key metrics to track for all functions within the organization
• Create strategic roadmaps and business cases to enter new products/lines of business
• Build, develop and maintain dashboards and performance metrics that support key business decisions
• Lead cross-functional projects using advanced data analytics to discover insights that will guide strategic decisions and uncover optimization opportunities
• Organize and drive successful completion of data insight initiatives through effective management of analyst and data employees and effective collaboration with stakeholders
• Communicate results and business impacts of insight initiatives to stakeholders within and outside of the company
• Recruit, train, develop and supervise analyst-level employees
• Anticipate future demands of initiatives related to people, technology, budget and business within your department and design/implement solutions to meet these needs
Founding Team
1. Achal Mittal (https://www.linkedin.com/in/achal-mittal-8a95993a/): NMIMS alum, ex-ICICI and HSBC, and co-founder of Rentomojo
2. Gautam Adukia (https://www.linkedin.com/in/gautam-adukia-415a0650/): IIM alum, ex-IIFL
Funding: Raised seed round funding from Matrix Partners
Quadratic Insights
Posted by Praveen Kondaveeti
Hyderabad
7 - 10 yrs
₹15L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

About Quadratyx:

We are a global, product-centric insight and automation services company. We help the world’s organizations make better and faster decisions using the power of insight and intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences and analytics than most other companies in India.

We firmly believe in Excellence Everywhere.


Job Description

Purpose of the Job/ Role:

• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.


Key Requisites:

• Expertise in Data structures and algorithms.

• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.

• Scaling of cloud-based infrastructure.

• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.

• Experience leading and mentoring a team of data engineers.

• Hands-on experience in test-driven development (TDD).

• Expertise in NoSQL databases like MongoDB, Cassandra, etc. (MongoDB preferred) and strong knowledge of relational databases.

• Good knowledge of Kafka and Spark Streaming internal architecture (a minimal streaming sketch follows after this list).

• Good knowledge of any Application Servers.

• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.

• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc.
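As a hedged illustration of the Kafka and Spark Streaming knowledge listed above, a minimal Spark Structured Streaming sketch might look like this; the broker address and topic name are hypothetical, and the spark-sql-kafka connector package must be on the classpath:

```python
# Sketch of a Spark Structured Streaming job reading from Kafka
# (hypothetical broker/topic; requires the spark-sql-kafka connector).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast the key to a string and count events per key.
counts = (
    events.selectExpr("CAST(key AS STRING) AS key")
    .groupBy("key")
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```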


Skills/ Competencies Required

Technical Skills

• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python or Java.

• Clear end-to-end experience in designing, programming, and implementing large software systems.

• Passion and analytical abilities to solve complex problems.

Soft Skills

• Always speaking your mind freely.

• Communicating ideas clearly in speech and writing; integrity to never copy or plagiarize the intellectual property of others.

• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.


Academic Qualifications & Experience Required

Required Educational Qualification & Relevant Experience

• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.

• Minimum 7-10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).

CES Information Technologies
Posted by Yash Rathod
Hyderabad
7 - 12 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Deep Learning
Python
Data modeling
o Critical-thinking mind who likes to solve complex problems, loves programming, and cherishes working in a fast-paced environment.
o Strong Python development skills, with 7+ years of experience with SQL.
o A bachelor's or master's degree in Computer Science or related areas.
o 5+ years of experience in data integration and pipeline development.
o Experience in implementing Databricks Delta Lake and data lakes.
o Expertise in designing and implementing data pipelines using modern data engineering approaches and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark.
o Experience in working with multiple file formats (Parquet, Avro, Delta Lake) and APIs.
o Experience with AWS Cloud for data integration with S3.
o Hands-on development experience with Python and/or Scala.
o Experience with SQL and NoSQL databases.
o Experience in using data modeling techniques and tools (focused on dimensional design).
o Experience with microservice architecture using Docker and Kubernetes.
o Experience working with one or more of the public cloud providers, i.e. AWS, Azure or GCP.
o Experience in effectively presenting and summarizing complex data to diverse audiences through visualizations and other means.
o Excellent verbal and written communication skills and strong leadership capabilities.

Skills:
ML
Modelling
Python
SQL
Azure Data Lake, Data Factory, Databricks, Delta Lake
Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹3L - ₹3.5L / yr
SQL
Data engineering
Data Engineer
Python
Big Data
+1 more
Strong programmer with expertise in Python and SQL

● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django, Flask or Pyramid); a minimal Flask sketch follows below
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to big data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Ability to write effective, scalable code
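As a small illustration of the Python framework familiarity listed above, here is a minimal Flask sketch; the route and payload are purely hypothetical:

```python
# Minimal Flask sketch (hypothetical /sum endpoint; illustrative only).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/sum", methods=["POST"])
def sum_values():
    # Expects JSON like {"values": [1, 2, 3]}.
    values = request.get_json().get("values", [])
    return jsonify({"sum": sum(values)})

if __name__ == "__main__":
    app.run(debug=True)
```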
Couture.ai
Posted by Shobhit Agarwal
Bengaluru (Bangalore)
3 - 8 yrs
₹30L - ₹40L / yr
Deep Learning
TensorFlow
Data Science
Artificial Neural Network (ANN)
Scala
+2 more
Looking for senior data science researchers.

Basic Qualifications:
∙ Bachelor's in Computer Science/Mathematics + research (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory or core mathematical areas) from Tier-1 tech institutes.
∙ 3+ years of relevant experience in building large-scale machine learning or deep learning models and/or systems.
∙ 1 year or more of experience specifically with deep learning (CNN, RNN, LSTM, RBM, etc.).
∙ Strong working knowledge of deep learning, machine learning, and statistics.
∙ Deep domain understanding of Personalization, Search and Visual.
∙ Strong math skills with statistical modeling / machine learning.
∙ Hands-on experience building models with deep learning frameworks like MXNet or TensorFlow (a minimal sketch follows below).
∙ Experience in using Python and statistical/machine learning libraries.
∙ Ability to think creatively and solve problems.
∙ Data presentation skills.
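As a small, hedged illustration of the TensorFlow experience asked for above (the architecture and data are arbitrary stand-ins, not the company's models):

```python
# Tiny Keras model sketch (arbitrary architecture and synthetic data;
# illustrative of TensorFlow familiarity only).
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Random stand-in data just to show the training and evaluation calls.
X = np.random.rand(128, 16).astype("float32")
y = (X.sum(axis=1) > 8).astype("float32")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```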

Preferred:
∙ MS/PhD (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory or core mathematical areas) from IISc and other top global universities.
∙ Or, publications in highly accredited journals (if available, please share links to your published work).
∙ Or, a history of scaling ML/deep learning algorithms at massively large scale.
Thrymr Software
Posted by Sharmistha Debnath
Hyderabad
0 - 2 yrs
₹4L - ₹5L / yr
R Programming
Python
Data Science
Company Profile:
Thrymr Software is an outsourced product development startup. Our primary development center is in Hyderabad, India, with a team of about 100+ members across various technical roles. Thrymr also has offices in Singapore, Hamburg (Germany) and Amsterdam (Netherlands). Thrymr works with companies to take complete ownership of building their end-to-end products, be it web or mobile applications or advanced analytics including GIS, machine learning or computer vision. http://thrymr.net

Job Location: Hyderabad, Financial District

Job Description:
As a Data Scientist, you will evaluate and improve products. You will collaborate with a multidisciplinary team of engineers and analysts on a wide range of problems. Your responsibilities will include (but are not limited to) designing, benchmarking and tuning machine-learning algorithms; painlessly and securely manipulating large and complex relational data sets; assembling complex machine-learning strategies; and building new predictive apps and services.

Responsibilities:
1. Work with large, complex datasets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed. Conduct end-to-end analysis that includes data gathering and requirements specification, processing, analysis, ongoing deliverables, and presentations.
2. Build and prototype analysis pipelines iteratively to provide insights at scale. Develop a comprehensive understanding of data structures and metrics, advocating for changes where needed for both product development and sales activity.
3. Interact cross-functionally with a wide variety of people and teams. Work closely with engineers to identify opportunities for, design, and assess improvements to various products.
4. Make business recommendations (e.g. cost-benefit, forecasting, experiment analysis) with effective presentations of findings at multiple levels of stakeholders through visual displays of quantitative information.
5. Research and develop analysis, forecasting, and optimization methods to improve the quality of user-facing products; example application areas include ads quality, search quality, end-user behavioral modeling, and live experiments.
6. Apart from the core data science work (subject to projects and availability), you may also be required to contribute to regular software development work such as web, backend and mobile application development.

Our Expectations:
1. You should have at least a B.Tech/B.Sc degree or equivalent practical experience (e.g., statistics, operations research, bioinformatics, economics, computational biology, computer science, mathematics, physics, electrical engineering, industrial engineering).
2. You should have practical working experience with statistical packages (e.g., R, Python, NumPy) and databases (e.g., SQL); a small NumPy sketch follows below.
3. You should have experience articulating business questions and using mathematical techniques to arrive at an answer using available data, and experience translating analysis results into business recommendations.
4. You should have strong analytical and research skills.
5. You should have good academics.
6. You will have to be very proactive and submit your daily/weekly reports very diligently.
7. You should be comfortable with working exceptionally hard, as we are a startup and this is a high-performance work environment.
8. This is NOT a 9-to-5 kind of job; you should be able to work long hours.

What you can expect:
1. High-performance work culture
2. Short-term travel across the globe at very short notice
3. Accelerated learning (you will learn at least thrice as much compared to other companies in similar roles) and becoming a lot more technical
4. Happy-go-lucky team with zero politics
5. Zero tolerance for unprofessional behavior and substandard performance
6. Performance-based appraisals that can happen anytime, with considerable hikes compared to single-digit annual hikes as the market standard
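A small NumPy sketch of the statistical-package familiarity mentioned in expectation 2 above; the data is synthetic:

```python
# Small NumPy/statistics sketch (synthetic data; illustrative only).
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=100.0, scale=15.0, size=1_000)

# Basic descriptive statistics of the kind an analysis would start with.
print(f"mean={sample.mean():.2f}, std={sample.std(ddof=1):.2f}")
print(f"95th percentile={np.percentile(sample, 95):.2f}")
```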