
Data Scientist-Job ID: ZS0701

Posted by phani kalyan
3 - 7.5 yrs
₹10L - ₹25L / yr
Bengaluru (Bangalore)
Skills
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Spark
Software deployment
PySpark
Job ID: ZS0701

Hi,

We are hiring a Data Scientist in Bangalore.

Req Skills:

  • NLP
  • ML programming
  • Spark
  • Model deployment
  • Experience processing unstructured data and building NLP models
  • Experience with big data tools such as PySpark
  • Experience with pipeline orchestration using Airflow and with model deployment is preferred
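For candidates gauging the expected level: building an NLP model typically starts with turning raw text into features. Below is a minimal bag-of-words featurizer in plain Python, purely illustrative of the idea; production work would use libraries such as scikit-learn or Spark ML.

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def bag_of_words(docs):
    # Build a shared vocabulary, then one count vector per document.
    vocab = sorted({tok for doc in docs for tok in tokenize(doc)})
    index = {tok: i for i, tok in enumerate(vocab)}
    vectors = []
    for doc in docs:
        vec = [0] * len(vocab)
        for tok, n in Counter(tokenize(doc)).items():
            vec[index[tok]] = n
        vectors.append(vec)
    return vocab, vectors
```

These count vectors are what a downstream classifier (logistic regression, naive Bayes, etc.) would consume.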
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar, Software Engineer, Memorres:
"For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering."

About EnterpriseMinds

Founded: 2017
Size: 100-1000
Stage: Profitable

About

Enterprise Minds, with a core focus on engineering products, automation and intelligence, partners with customers on the trajectory towards increasing outcomes, relevance and growth.

Harnessing the power of data and the forces that define AI, Machine Learning and Data Science, we believe in institutionalising go-to-market models, not just exploring possibilities.

We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership and collaboration, our people work in a spirit of co-creation, co-innovation and co-development to engineer next-generation software products with the help of accelerators.

Through Communities, we connect and attract talent that shares skills and expertise. Through Innovation Labs and global design studios, we deliver creative solutions.

We create vertically isolated pods which have a narrow but deep focus. We also create horizontal pods that collaborate to deliver sustainable outcomes.

We follow Agile methodologies to fail fast and deliver scalable, modular solutions. We constantly self-assess and realign to work with each customer in the most impactful manner.

Connect with the team
Nikita Aher
phani kalyan
Company social profiles
N/A

Similar jobs

Leading Fleet Mgmt. Platform
Agency job
via Qrata by Blessy Fernandes
Remote only
4 - 8 yrs
₹20L - ₹45L / yr
Data engineering
Apache Kafka
Spark
data engineer
Big Data
+2 more
Required Skills
  • Experience with various stream-processing and batch-processing tools (Kafka, Spark etc.); programming in Python.
  • Experience with relational and non-relational databases.
  • Fairly good understanding of AWS (or any equivalent).
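As a rough illustration of the stream-processing idea behind the skills above, here is a toy tumbling-window aggregation in plain Python. A real pipeline would consume records from Kafka and run on Spark, but the windowing logic is the same shape; the event format here is made up for the example.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per key in fixed (tumbling) time windows.

    `events` is an iterable of (timestamp_seconds, key) pairs, a
    stand-in for records consumed from a stream such as a Kafka topic.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)
```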


Key Responsibilities
  • Design new systems and redesign existing systems to work at scale.
  • Care about things like fault tolerance, durability, backups and recovery, performance, maintainability, code simplicity etc.
  • Lead a team of software engineers and help create an environment of ownership and learning.
  • Introduce best practices of software development and ensure their adoption across the team.
  • Help set and maintain coding standards for the team.
Curl
at Curl
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹25L / yr
Data Visualization
PowerBI
ETL
Business Intelligence (BI)
Data Analytics
+6 more
Main Responsibilities:

  • Work closely with different Front Office and Support Function stakeholders, including but not restricted to Business Management, Accounts, Regulatory Reporting, Operations, Risk, Compliance and HR, on all data collection and reporting use cases.
  • Collaborate with Business and Technology teams to understand enterprise data, and create an innovative narrative to explain, engage and enlighten regular staff members as well as executive leadership with data-driven storytelling.
  • Solve data consumption and visualization through a data-as-a-service distribution model.
  • Articulate findings clearly and concisely for different target use cases, including through presentations, design solutions and visualizations.
  • Perform ad hoc / automated report generation tasks using Power BI, Oracle BI, Informatica.
  • Perform data access/transfer and ETL automation tasks using Python, SQL, OLAP / OLTP, RESTful APIs, and IT tools (CFT, MQ-Series, Control-M, etc.).
  • Provide support and maintain the availability of BI applications irrespective of the hosting location.
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability; provide incident-related communications promptly.
  • Work with strict deadlines on high-priority regulatory reports.
  • Serve as a liaison between business and technology to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning.
  • Work for the APAC Chief Data Office and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
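To illustrate the ETL automation responsibility above, here is a minimal extract-transform-load sketch using Python's built-in sqlite3. The table and column names are invented for the example; a production job would target the bank's actual reporting database.

```python
import sqlite3

def run_etl(rows):
    """Minimal ETL: normalise raw (dept, amount) rows and load the
    aggregated totals into a reporting table in in-memory SQLite."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE report (dept TEXT, total REAL)")
    # Transform: trim/upper-case department names and sum amounts.
    totals = {}
    for dept, amount in rows:
        key = dept.strip().upper()
        totals[key] = totals.get(key, 0.0) + amount
    # Load the cleaned aggregates.
    conn.executemany("INSERT INTO report VALUES (?, ?)", sorted(totals.items()))
    conn.commit()
    return conn
```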

General Skills:
  • Excellent knowledge of RDBMS and hands-on experience with complex SQL is a must; some experience with NoSQL and Big Data technologies like Hive and Spark would be a plus.
  • Experience with industrialized reporting on BI tools like Power BI and Informatica.
  • Knowledge of data-related industry best practices in the highly regulated CIB industry; experience with regulatory report generation for financial institutions.
  • Knowledge of industry-leading data access, data security, Master Data and Reference Data Management, and establishing data lineage.
  • 5+ years of experience in Data Visualization / Business Intelligence / ETL developer roles.
  • Ability to multi-task and manage various projects simultaneously.
  • Attention to detail.
  • Ability to present to Senior Management and ExCo; excellent written and verbal communication skills.
Exponentia.ai
at Exponentia.ai
1 product
1 recruiter
Vipul Tiwari
Posted by Vipul Tiwari
Mumbai
4 - 6 yrs
₹12L - ₹19L / yr
ETL
Informatica
Data Warehouse (DWH)
databricks
Amazon Web Services (AWS)
+6 more

Job Description

Position: Sr Data Engineer – Databricks & AWS

Experience: 4 - 5 Years

 

Company Profile:


Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are transforming ourselves and rapidly expanding our business.

Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.

One of the top partners of Cloudera (leading analytics player) and Qlik (leader in BI technologies), Exponentia.ai was awarded the ‘Innovation Partner Award’ by Qlik in 2017.

Get to know more about us on our website: http://www.exponentia.ai/ and Life @Exponentia.

 

Role Overview:

  • A Data Engineer understands the client requirements and develops and delivers the data engineering solutions as per the scope.
  • The role requires good skills in developing solutions using various services required for data architecture on Databricks Delta Lake, streaming, AWS, ETL development, and data modeling.

 

Job Responsibilities

  • Design data solutions on Databricks, including Delta Lake, data warehouses, data marts and other data solutions to support the analytics needs of the organization.
  • Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services.
  • Design, develop and manage the pipelining (collection, storage, access), data engineering (data quality, ETL, data modelling) and understanding (documentation, exploration) of the data.
  • Interact with stakeholders regarding data landscape understanding, conducting discovery exercises, developing proofs of concept and demonstrating them to stakeholders.

 

Technical Skills

  • More than 2 years of experience in developing data lakes and data marts on the Databricks platform.
  • Proven skill sets in AWS Data Lake services such as AWS Glue, S3, Lambda, SNS and IAM, and skills in Spark, Python and SQL.
  • Experience in Pentaho.
  • Good understanding of developing data warehouses, data marts etc.
  • Good understanding of system architectures and design patterns, and the ability to design and develop applications using these principles.
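Much of the Databricks/Delta Lake work above centres on operations like MERGE (upsert). Here is a toy in-memory version in plain Python, purely to illustrate the matched-update / unmatched-insert semantics; Delta Lake itself performs this transactionally over Parquet files.

```python
def merge_upsert(target, updates, key="id"):
    """Toy Delta Lake-style MERGE: update rows that match on `key`,
    insert the rest. Both inputs are lists of dicts."""
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        if row[key] in by_key:
            by_key[row[key]].update(row)   # matched -> update
        else:
            by_key[row[key]] = dict(row)   # not matched -> insert
    return [by_key[k] for k in sorted(by_key)]
```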

 

Personality Traits

  • Good collaboration and communication skills.
  • Excellent problem-solving skills to be able to structure the right analytical solutions.
  • Strong sense of teamwork, ownership, and accountability.
  • Analytical and conceptual thinking.
  • Ability to work in a fast-paced environment with tight schedules.
  • Good presentation skills with the ability to convey complex ideas to peers and management.

 

Education:

 

BE / ME / MS / MCA.

    



Telstra
at Telstra
1 video
1 recruiter
Mahesh Balappa
Posted by Mahesh Balappa
Bengaluru (Bangalore), Hyderabad, Pune
3 - 7 yrs
Best in industry
Spark
Hadoop
NoSQL Databases
Apache Kafka

About Telstra

 

Telstra is Australia’s leading telecommunications and technology company, with operations in more than 20 countries, including India, where we’re building a new Innovation and Capability Centre (ICC) in Bangalore.

 

We’re growing fast, and for you that means many exciting opportunities to develop your career at Telstra. Join us on this exciting journey, and together, we’ll reimagine the future.

 

Why Telstra?

 

  • We're an iconic Australian company with a rich heritage that's been built over 100 years. Telstra is Australia's leading telecommunications and technology company. We've been operating internationally for more than 70 years.
  • International presence spanning over 20 countries.
  • We are one of the 20 largest telecommunications providers globally.
  • At Telstra, the work is complex and stimulating, but with that comes a great sense of achievement. We are shaping tomorrow's modes of communication with our innovation-driven teams.

 

Telstra offers an opportunity to make a difference to the lives of millions of people by providing the choice of flexibility in work and a rewarding career that you will be proud of!

 

About the team

Being part of Networks & IT means you'll be part of a team that focuses on extending our network superiority to enable the continued execution of our digital strategy.

With us, you'll be working with world-leading technology and change the way we do IT to ensure business needs drive priorities, accelerating our digitisation programme.

 

Focus of the role

A new engineer joining the data chapter will focus primarily on developing reusable data-processing and storage frameworks that can be used across the data platform.

 

About you

To be successful in the role, you'll bring skills and experience in:-

 

Essential 

  • Hands-on experience in Spark Core, Spark SQL, SQL/Hive/Impala, Git/SVN/any other VCS, and data warehousing
  • Skilled in the Hadoop ecosystem (HDP/Cloudera/MapR/EMR etc.)
  • Azure Data Factory/Airflow/Control-M/Luigi
  • PL/SQL
  • Exposure to NoSQL (HBase/Cassandra/GraphDB (Neo4j)/MongoDB)
  • File formats (Parquet/ORC/Avro/Delta/Hudi etc.)
  • Kafka/Kinesis/Event Hubs
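One recurring detail behind the file-format and Hadoop-ecosystem items above is the Hive-style partitioned layout used when writing Parquet/ORC/Avro files to a data lake. A small sketch of how such paths are built (table and file names are illustrative):

```python
from datetime import date

def partition_path(table, event_date, fmt="parquet"):
    """Build a Hive-style partitioned path (year=/month=/day=) of the
    kind used when laying out columnar files on a data lake."""
    return (f"{table}/year={event_date.year}"
            f"/month={event_date.month:02d}"
            f"/day={event_date.day:02d}/part-0000.{fmt}")
```

Query engines such as Hive, Spark and Presto use these `key=value` directory names to prune partitions at read time.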

 

Highly Desirable

Experience and knowledgeable on the following:

  • Spark Streaming
  • Cloud exposure (Azure/AWS/GCP)
  • Azure data offerings - ADF, ADLS2, Azure Databricks, Azure Synapse, Eventhubs, CosmosDB etc.
  • Presto/Athena
  • Azure DevOps
  • Jenkins/ Bamboo/Any similar build tools
  • Power BI
  • Prior experience building, or working in a team that builds, reusable frameworks.
  • Data modelling.
  • Data Architecture and design principles. (Delta/Kappa/Lambda architecture)
  • Exposure to CI/CD
  • Code Quality - Static and Dynamic code scans
  • Agile SDLC      

 

If you've got a passion to innovate, want to succeed as part of a great team, and are looking for the next step in your career, we'd welcome you to apply!

___________________________

 

We’re committed to building a diverse and inclusive workforce in all its forms. We encourage applicants from diverse gender, cultural and linguistic backgrounds and applicants who may be living with a disability. We also offer flexibility in all our roles, to ensure everyone can participate.

To learn more about how we support our people, including accessibility adjustments we can provide you through the recruitment process, visit tel.st/thrive.

Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
Big Data
Data engineering
Big Data Engineering
Data Engineer
ETL
+5 more

Data Engineer JD:

  • Designing, developing, constructing, installing, testing and maintaining complete data management & processing systems.
  • Building a highly scalable, robust, fault-tolerant & secure user data platform adhering to data protection laws.
  • Taking care of the complete ETL (Extract, Transform & Load) process.
  • Ensuring the architecture is planned in such a way that it meets all the business requirements.
  • Exploring new ways of using existing data to provide more insights from it.
  • Proposing ways to improve the data quality, reliability & efficiency of the whole system.
  • Creating data models to reduce system complexity and hence increase efficiency & reduce cost.
  • Introducing new data management tools & technologies into the existing system to make it more efficient.
  • Setting up monitoring and alarming on data pipeline jobs to detect failures and anomalies.
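The monitoring-and-alarming responsibility above can be pictured as a simple threshold check over recent pipeline runs. This is a toy sketch with made-up field names and thresholds; real setups would wire such checks into Airflow callbacks or an alerting system.

```python
def check_pipeline_runs(runs, max_failures=1, max_duration_s=3600):
    """Scan recent pipeline runs and return alert messages for
    excessive failures and runs exceeding a duration threshold."""
    alerts = []
    failures = [r for r in runs if r["status"] == "failed"]
    if len(failures) > max_failures:
        alerts.append(f"{len(failures)} failed runs (threshold {max_failures})")
    for r in runs:
        if r["duration_s"] > max_duration_s:
            alerts.append(f"run {r['id']} took {r['duration_s']}s")
    return alerts
```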

What do we expect from you?

  • BS/MS in Computer Science or equivalent experience.
  • 5 years of recent experience in Big Data engineering.
  • Good experience working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow and NoSQL systems.
  • Excellent programming and debugging skills in Java or Python.
  • Hands-on experience with Apache Spark and Python, and in deploying ML models.
  • Has worked on streaming and real-time pipelines.
  • Experience with Apache Kafka, or has worked with any of Spark Streaming, Flume or Storm.

Focus Area:

  • R1: Data structures & algorithms
  • R2: Problem solving + coding
  • R3: Design (LLD)

Bengaluru (Bangalore)
2 - 3 yrs
₹15L - ₹20L / yr
Python
Scala
Hadoop
Spark
Data Engineer
+4 more
  • We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
  • The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
  • The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
  • Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
  • You should be able to work in a high-volume environment and have outstanding planning and organisational skills.

 

Qualifications for Data Engineer

 

  • Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Looking for a candidate with 2-3 years of experience in a Data Engineer role, who is a CS graduate or has an equivalent experience.

 

What we're looking for?

 

  • Experience with big data tools: Hadoop, Spark, Kafka and other alternate tools.
  • Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
  • Experience with data pipeline and workflow management tools: Luigi, Airflow.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark-Streaming.
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala.
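Workflow managers like Luigi and Airflow, mentioned above, boil down to executing a task DAG in dependency order. Here is a minimal sketch of that scheduling step (Kahn's topological sort) in plain Python; the DAG format is invented for the example.

```python
from collections import deque

def run_order(dag):
    """Return a valid execution order for a task DAG given as
    {task: [upstream dependencies]}, the way a scheduler such as
    Airflow or Luigi resolves task ordering (Kahn's algorithm)."""
    indegree = {t: len(deps) for t, deps in dag.items()}
    downstream = {t: [] for t in dag}
    for t, deps in dag.items():
        for d in deps:
            downstream[d].append(t)
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in sorted(downstream[t]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(dag):
        raise ValueError("cycle detected in DAG")
    return order
```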
netmedscom
at netmedscom
3 recruiters
Vijay Hemnath
Posted by Vijay Hemnath
Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Machine Learning (ML)
Software deployment
CI/CD
Cloud Computing
Snowflake schema
+19 more

We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers, with an adaptable and productive working style that fits a fast-moving environment.

 

Skills:

- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end-to-end ML solutions from business hypothesis to deployment; understanding of the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience using source control management (CI/CD).
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training / re-training, model management, model deployment, model experimentation/development, and alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbench/managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, Redshift, BigQuery, and Azure SQL DW.
- Distributed computing services like PySpark, EMR, Databricks.
- Data storage services like Cloud Storage, S3, Blob Storage, S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.).
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.).
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.).
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross-functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.

 

Roles and Responsibilities:

Deploying ML models into production, and scaling them to serve millions of customers.

Technical solutioning skills with a deep understanding of technical API integrations, AI / Data Science, Big Data, and public cloud architectures / deployments in a SaaS environment.

Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills, with the ability to build and maintain strong relationships with business, operations and technology teams, internally and externally.

Provide software design and programming support to projects.

 

 Qualifications & Experience:

Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.

 

upGrad
at upGrad
1 video
19 recruiters
Priyanka Muralidharan
Posted by Priyanka Muralidharan
Bengaluru (Bangalore), Mumbai
4 - 6 yrs
₹10L - ₹21L / yr
Data Science
R Programming
Python
SQL
Natural Language Processing (NLP)
+2 more

About Us

upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.

  • upGrad was awarded the Best Tech for Education by IAMAI for 2018-19.
  • upGrad was also ranked as one of the LinkedIn Top Startups 2018: The 25 most sought-after startups in India.
  • upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany.
  • We were also covered by the Financial Times along with other disruptors in Ed-Tech.
  • upGrad is the official education partner for the Government of India - Startup India program.
  • Our program with IIIT-B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning.

     

    Role Summary

    Are you excited by the challenge and the opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating and creating data products from scratch and pushing them live into production systems? Do you want to work with a team of highly motivated members who are on a mission to empower individuals through education?

    If this is you, come join us and become a part of the upGrad technology team. At upGrad the technology team enables all the facets of the business - whether it's bringing efficiency to our marketing and sales initiatives, enhancing our student learning experience, empowering our content, delivery and student success teams, or aiding our students towards their desired career outcomes. We play the part of bringing together data & tech to solve these business problems and opportunities at hand.

    We are looking for a highly skilled, experienced and passionate data scientist who can come on board and help create the next generation of data-powered education tech products. The ideal candidate has worked in a Data Science role before, is comfortable working with unknowns, evaluating the data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications. Someone with a strong math, stats and data-science background, comfortable handling data (structured + unstructured), as well as strong engineering know-how to implement and support such data products in a production environment.

    Ours is a highly iterative and fast-paced environment, hence being flexible, communicating well and attention to detail are very important too. The ideal candidate should be passionate about customer impact and comfortable working with multiple stakeholders across the company.


    Roles & Responsibilities

      • 3+ years of experience in analytics, data science, machine learning or comparable role
      • Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or related discipline 
      • Experience in building and deploying Machine Learning models in Production systems
      • Strong analytical skills: ability to make sense out of a variety of data and its relation/applicability to the business problem or opportunity at hand
      • Strong programming skills: comfortable with Python - pandas, numpy, scipy, matplotlib; Databases - SQL and noSQL
      • Strong communication skills: ability to formulate/understand the business problem at hand, as well as to discuss it with stakeholders from non-data-science backgrounds
      • Comfortable dealing with ambiguity and competing objectives

       

      Skills Required

      • Experience in Text Analytics, Natural Language Processing
      • Advanced degree in Data Science/Data Analytics or Math/Statistics
      • Comfortable with data-visualization tools and techniques
      • Knowledge of AWS and Data Warehousing
      • Passion for building data products for Production systems - a strong desire to impact the product through data-science techniques

Mumbai
1 - 3 yrs
₹3.5L - ₹5L / yr
Natural Language Processing (NLP)
NLP
Keras
Scikit-Learn
Experience: 1-2 years

Location : Andheri East

Notice Period: Immediate-15 days

Responsibilities:
1. Study and transform data science prototypes.
2. Design NLP applications.
3. Select appropriate annotated datasets for Supervised Learning methods.
4. Find and implement the right algorithms and tools for NLP tasks.
5. Develop NLP systems according to requirements.
6. Train the developed model and run evaluation experiments.
7. Perform statistical analysis of results and refine models.
8. Extend ML libraries and frameworks to apply in NLP tasks.
9. Use effective text representations to transform natural language into useful features.
10. Develop APIs to deliver deciphered results, optimized for time and memory.

Requirements:
1. Proven experience as an NLP Engineer of at least one year.
2. Understanding of NLP techniques for text representation, semantic extraction techniques, data structures and modeling.
3. Ability to effectively design software architecture.
4. Deep understanding of text representation techniques (such as n-grams, bag of words, sentiment analysis etc.), statistics and classification algorithms.
5. More than one year of hands-on experience with Python.
6. Ability to write robust and testable code.
7. Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn).
8. Strong communication skills.
9. An analytical mind with problem-solving abilities.
10. Bachelor's degree in Computer Science, Mathematics or Computational Linguistics.
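On the text representation techniques named in the requirements (n-grams in particular), here is a minimal word n-gram extractor in plain Python, purely as an illustration of the concept:

```python
def word_ngrams(tokens, n):
    """Return the list of word n-grams (as tuples) over a token list,
    e.g. bigrams for n=2 - a classic sparse text representation."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
```

Counting these tuples per document, exactly like unigram bag-of-words counts, yields n-gram features for classifiers.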
UpX Academy
at UpX Academy
2 recruiters
Suchit Majumdar
Posted by Suchit Majumdar
Noida, Hyderabad, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹4L - ₹12L / yr
Spark
Hadoop
MongoDB
Python
Scala
+3 more
Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become popular in the industry and gain visibility. Host regular sessions on Big Data-related technologies and get paid to learn.