Data Scientist

at Computer Power Group Pvt Ltd

Posted by Sowmya M
Bengaluru (Bangalore), Chennai, Pune, Mumbai
7 - 13 yrs
₹14L - ₹20L / yr
Full time
Skills
R Programming
Python
Data Science
SQL server
Business Analysis
Scala
NumPy
Functional regression
Requirement Specifications:
Job Title: Data Scientist
Experience: 7 to 10 years
Work Location: Mumbai, Bengaluru, Chennai
Job Role: Permanent
Notice Period: Immediate to 60 days

Job description:
  • Support delivery of one or more data science use cases, leading on data discovery and model-building activities
  • Conceptualize and quickly build POCs on new product ideas; should be willing to work as an individual contributor
  • Open to learning and implementing newer tools/products
  • Experiment and identify the best methods, techniques, and algorithms for analytical problems
  • Operationalize: work closely with the engineering, infrastructure, service management, and business teams to operationalize use cases

Essential Skills
  • Minimum 2-7 years of hands-on experience with statistical software tools: SQL, R, Python
  • 3+ years' experience in business analytics, forecasting, or business planning, with an emphasis on analytical modeling, quantitative reasoning, and metrics reporting
  • Experience working with large data sets to extract business insights or build predictive models
  • Proficiency in one or more statistical tools/languages (Python, Scala, R, SPSS, or SAS) and related packages like Pandas, SciPy/Scikit-learn, NumPy, etc.
  • Good data intuition/analysis skills; SQL and PL/SQL knowledge is a must
  • Manage and transform a variety of datasets: cleanse, join, and aggregate the datasets
  • Hands-on experience running various methods such as regression, random forest, k-NN, k-means, boosted trees, SVM, neural networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis, and statistics (hypothesis testing, descriptive statistics)
  • Deep domain knowledge (BFSI, Manufacturing, Auto, Airlines, Supply Chain, Retail & CPG)
  • Demonstrated ability to work under time constraints while delivering incremental value

Education
  • Minimum a Masters in Statistics, or a PhD in domains linked to applied statistics, applied physics, Artificial Intelligence, Computer Vision, etc.
  • BE/BTech/BSc Statistics/BSc Maths
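As a hedged illustration of one of the methods listed above, here is a minimal k-nearest-neighbours classifier in NumPy; the synthetic data and the function name are invented for the example:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest
    training points (Euclidean distance)."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
        nearest = np.argsort(dists)[:k]               # indices of the k closest
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])       # majority label wins
    return np.array(preds)

# Two well-separated synthetic clusters
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[0.05, 0.1], [5.05, 5.1]])

print(knn_predict(X_train, y_train, X_test))  # → [0 1]
```

In practice the scikit-learn estimators mentioned in the skills list expose the same idea behind a fit/predict interface.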

About Computer Power Group Pvt Ltd

Founded
2003
Type
Services
Size
employees
Stage
Profitable

Similar jobs

Data Scientist (Kofax Accredited Developers)

at A global business process management company

Agency job
via Jobdost
Data Science
Kofax
Data Scientist
Machine Learning (ML)
Natural Language Processing (NLP)
Statistical Analysis
Appian
C#
SQL server
Webservices
Pune, Bengaluru (Bangalore), Chennai, Mumbai, Gurugram, Nashik
5 - 10 yrs
₹20L - ₹22L / yr

B1 – Data Scientist  -  Kofax Accredited Developers

 

Requirement – 3

 

Mandatory –

  • Accreditation in Kofax KTA/KTM
  • Experience in Kofax Total Agility development: 2-3 years minimum
  • Ability to develop and translate functional requirements into design
  • Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, implementation, and process orchestration
  • Experience in Kofax customization: writing custom Workflow Agents, custom modules, and release scripts
  • Application development using Kofax and KTM modules
  • Good/advanced understanding of Machine Learning/NLP/statistics
  • Exposure to or understanding of RPA/OCR/cognitive capture tools like Appian/UiPath/Automation Anywhere, etc.
  • Excellent communication skills and a collaborative attitude
  • Work with multiple teams and stakeholders, such as the Analytics, RPA, Technology, and Project Management teams
  • Good understanding of compliance, data governance, and risk control processes

Total Experience – 7-10 years in BPO/KPO/ITES/BFSI/Retail/Travel/Utilities/Service industry

Good to have

  • Previous experience of working on Agile & Hybrid delivery environment
  • Knowledge of VB.Net, C# (C-Sharp), SQL Server, web services

 

Qualification -

  • Masters in Statistics/Mathematics/Economics/Econometrics, or BE/B-Tech, MCA, or MBA

 

Job posted by
Saida Jabbar
Data Warehouse (DWH)
Informatica
ETL
SQL Azure
Windows Azure
Python
PySpark
synapse
Azure data factory
Azure data bricks
Remote only
4 - 7 yrs
₹7L - ₹10L / yr
We need Data Engineers. Below is the JD:
a. 4+ years of experience in Azure development using PySpark (Databricks) and Synapse.
b. Real world project experience in using ADF to bring in data from on-premise applications into Azure using ADF pipelines.
c. Strong working experience on transforming data using PySpark on Databricks.
d. Experience with Synapse database and transformations within Synapse
e. Strong knowledge of SQL.
f. Experience in working with multiple kinds of source systems (e.g. HANA, Teradata, MS SQL Server, flat files, JSON, etc.)
g. Strong communication skills.
h. Experience working in an Agile environment
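Transformations of the kind described in (c) are usually a cleanse/cast step followed by an aggregation. A minimal, hedged sketch using pandas for portability (the PySpark DataFrame API calls `dropna`, `groupBy`, and `agg` with the same shape; all column names and values here are hypothetical):

```python
import pandas as pd

# Hypothetical raw extract, as might arrive from an on-premise source via an ADF pipeline
raw = pd.DataFrame({
    "region": ["north", "north", "south", None],
    "amount": ["10.5", "4.5", "7.0", "3.0"],   # numeric values landed as strings
})

cleaned = (
    raw.dropna(subset=["region"])                           # drop rows with no region
       .assign(amount=lambda d: d["amount"].astype(float))  # cast string -> double
)
summary = cleaned.groupby("region", as_index=False)["amount"].sum()

print(summary.to_dict("records"))
# → [{'region': 'north', 'amount': 15.0}, {'region': 'south', 'amount': 7.0}]
```

On Databricks, the same logic would typically run over a Spark DataFrame read from a lake path rather than an in-memory frame.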
Job posted by
Priya Sahni

Cloud Data Engineer

at Intuitive Technology Partners

OLTP
data ops
cloud data
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
PySpark
ETL
Scala
CI/CD
Data-flow analysis
Remote only
9 - 20 yrs
Best in industry

THE ROLE: Sr. Cloud Data Infrastructure Engineer

As a Sr. Cloud Data Infrastructure Engineer with Intuitive, you will be responsible for building new data pipelines, and for converting pipelines from legacy environments to modern cloud environments, to support the analytics and data science initiatives of our enterprise customers. You will work closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps engineering projects and initiatives. This is a central, critical role: establishing the DataOps/DataX data logistics and management practice, building data pipelines, enforcing best practices, owning the construction of complex and performant data lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud Data Infrastructure Engineer is the main point of contact for everything related to data lake formation and data at scale. In this role, we expect our DataOps leaders to be obsessed with data and with providing insights that help our end customers.

ROLES & RESPONSIBILITIES:

  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
  • Develop scalable and reusable frameworks for ingesting large volumes of data from multiple sources.
  • Modern data orchestration engineering: query tuning, performance tuning, troubleshooting, and debugging big data solutions.
  • Provide technical leadership, foster a team environment, and provide mentorship and feedback to technical resources.
  • Deep understanding of ETL/ELT design methodologies, patterns, personas, strategy, and tactics for complex data transformations.
  • Data processing/transformation using various technologies such as Spark and cloud services.
  • Understand current data engineering pipelines built with legacy SAS tools and convert them to modern pipelines.

 

Data Infrastructure Engineer Strategy Objectives: End to End Strategy

Define how data is acquired, stored, processed, distributed, and consumed.
Collaborate, with shared responsibility across disciplines, as partners in delivery, to progress our maturity model in the end-to-end data practice.

  • Understanding and experience with modern cloud data orchestration and engineering for one or more of the following cloud providers - AWS, Azure, GCP.
  • Leading multiple engagements to design and develop data logistic patterns to support data solutions using data modeling techniques (such as file based, normalized or denormalized, star schemas, schema on read, Vault data model, graphs) for mixed workloads, such as OLTP, OLAP, streaming using any formats (structured, semi-structured, unstructured).
  • Applying leadership and proven experience with architecting and designing data implementation patterns and engineered solutions using native cloud capabilities that span data ingestion & integration (ingress and egress), data storage (raw & cleansed), data prep & processing, master & reference data management, data virtualization & semantic layer, data consumption & visualization.
  • Implementing cloud data solutions in the context of business applications, cost optimization, client's strategic needs and future growth goals as it relates to becoming a 'data driven' organization.
  • Applying and creating leading practices that support high availability, scalable, process and storage intensive solutions architectures to data integration/migration, analytics and insights, AI, and ML requirements.
  • Applying leadership and review to create high quality detailed documentation related to cloud data Engineering.
  • A good understanding of one or more of the following is a big plus: CI/CD, cloud DevOps, containers (Kubernetes/Docker, etc.), Python/PySpark/JavaScript.
  • Implementing cloud data orchestration and data integration patterns (AWS Glue, Azure Data Factory, Event Hub, Databricks, etc.), and storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake).
  • A certification in one of the following is a big plus: AWS/Azure/GCP data engineering and migration.

 

 

KEY REQUIREMENTS:

  • 10+ years' experience as a data engineer.
  • Must have 5+ years implementing data engineering solutions with multiple cloud providers and toolsets.
  • This is a hands-on role building data pipelines using cloud-native and partner solutions; hands-on technical experience with data at scale.
  • Must have deep expertise in one of the programming languages for data processing (Python, Scala). Experience with Python, PySpark, Hadoop, Hive, and/or Spark to write data pipelines and data processing layers.
  • Must have worked with multiple database technologies and patterns. Good SQL experience for writing complex SQL transformations.
  • Performance tuning of Spark SQL running on S3/Data Lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations.
  • Nice to have: Databricks administration, including the security and infrastructure features of Databricks.
  • Experience with development tools for CI/CD, unit and integration testing, automation, and orchestration.
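As a small, hedged illustration of the "complex SQL transformations" bullet, here is a windowed running-total query, shown with Python's stdlib sqlite3 for portability (window functions need SQLite 3.25+); the table and columns are invented for the example, and the same SQL shape works in Spark SQL or Synapse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, day INTEGER, amount REAL);
INSERT INTO sales VALUES
  ('north', 1, 10.0), ('north', 2, 5.0),
  ('south', 1, 7.0),  ('south', 2, 3.0);
""")

# Running total per region: a typical windowed transformation
rows = conn.execute("""
SELECT region, day,
       SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
FROM sales
ORDER BY region, day
""").fetchall()

print(rows)
# → [('north', 1, 10.0), ('north', 2, 15.0), ('south', 1, 7.0), ('south', 2, 10.0)]
```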
Job posted by
shalu Jain

Data Engineer

at AI-powered cloud-based SaaS solution

Agency job
via wrackle
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
Data Structures
Agile/Scrum
SaaS
Cassandra
Spark
Python
NOSQL Databases
Hadoop
HDFS
MapReduce
AWS CloudFormation
EMR
Amazon S3
Apache Kafka
Apache ZooKeeper
Systems Development Life Cycle (SDLC)
Java
YARN
Bengaluru (Bangalore)
2 - 10 yrs
₹15L - ₹50L / yr
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years' experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Strong leadership experience: leading meetings, presenting if required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience on Cloud or AWS is preferable
● A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
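The unit testing & TDD bullet can be sketched with the stdlib `unittest` module: write the tests first, then the implementation. The data-cleaning helper below is hypothetical and assumes nothing about the team's actual codebase:

```python
import unittest

def dedupe_preserve_order(items):
    """Remove duplicates while keeping first-seen order: a common
    data-cleaning helper one might grow test-first."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

class TestDedupe(unittest.TestCase):
    def test_removes_duplicates(self):
        self.assertEqual(dedupe_preserve_order([3, 1, 3, 2, 1]), [3, 1, 2])

    def test_empty_input(self):
        self.assertEqual(dedupe_preserve_order([]), [])

# Run the suite explicitly (avoids unittest.main()'s sys.exit)
suite = unittest.TestLoader().loadTestsFromTestCase(TestDedupe)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```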
Job posted by
Naveen Taalanki

Data Engineer

at Hammoq

Founded 2020  •  Products & Services  •  20-100 employees  •  Raised funding
pandas
NumPy
Data engineering
Data Engineer
Apache Spark
PySpark
Image Processing
Scikit-Learn
Machine Learning (ML)
Python
Web Scraping
Remote, Indore, Ujjain, Hyderabad, Bengaluru (Bangalore)
5 - 8 yrs
₹5L - ₹15L / yr
  • Performs analytics to extract insights from the organization's raw historical data.
  • Generates usable training datasets for any/all MV projects, with the help of annotators if needed.
  • Analyses user trends and identifies their biggest bottlenecks in the Hammoq workflow.
  • Tests the short/long-term impact of productized MV models on those trends.
  • Skills: NumPy, Pandas, Apache Spark/PySpark, and ETL are mandatory.
Job posted by
Nikitha Muthuswamy

Data Engineer

at Cervello

Agency job
via StackNexus
Data engineering
Data modeling
Data Warehouse (DWH)
SQL
Windows Azure
Python
data factory
data bricks
Hyderabad
5 - 7 yrs
₹5L - ₹15L / yr
Contract job - long-term, for 1 year

Client - Cervello
Job Role - Data Engineer
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
Job posted by
suman kattella

Business Analyst

at Soroco

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
Business Analysis
Business Analyst
Python
R Programming
SQL
Tableau
PowerBI
MS-Excel
Remote, Bengaluru (Bangalore)
2 - 4 yrs
₹4L - ₹12L / yr

The Company

We are a young, fast-growing AI company shaking up how work gets done across the enterprise. Every day, we help clients identify opportunities for automation, and then use a variety of AI and advanced automation techniques to rapidly model manual work in the form of code. Our impact has already been felt across some of the most reputable Fortune 500 companies, who are consequently seeing major gains in efficiency, client satisfaction, and overall savings. It’s an exciting experience to watch companies transform themselves rapidly with Soroco!

Based across US, UK, and India, our team includes several PhDs and graduates from top-notch universities such as MIT, Harvard, Carnegie Mellon, Dartmouth, and top rankers/medalists from the IITs and NITs. The senior leadership includes a former founder of a VC/hedge fund, a computer scientist from Harvard, and a former founder of a successful digital media firm. Our team has collectively published more than 100 papers in international journals and conferences and been granted over 20 patents. Our board members include some of the most well-known entrepreneurs across the globe, and our early clients include some of the most innovative Fortune 100 companies. 

 

The Role

This is an individual contributor role: the Business Analyst (BA) will work closely with the Data Science Manager in India. BAs will be primarily responsible for analyzing improvement opportunities in business processes, people productivity, and application usage experience, and for other advanced analytics projects, using data collected by the Soroco Scout platform, for clients from diverse industries.

Responsibilities include (but are not limited to):

  • Understand project objectives and frame an analytics approach to provide the solution.
  • Take ownership of extracting, cleansing, structuring & analyzing data.
  • Analyze data using statistical or rule-based techniques to identify actionable insights.
  • Prepare PowerPoint presentations / build visualization solutions to present the analysis & actionable insights to the client.
  • Brainstorm and perform root cause analysis to provide suggestions to improve the Scout platform.
  • Work closely with product managers to build analytical features in the product.
  • Manage multiple projects simultaneously in a fast-paced setting.
  • Communicate effectively with client engagement, product, and engineering teams.

The Candidate

An ideal BA should be passionate and entrepreneurial in nature, with a flexible attitude to learn anything and a willingness to provide the highest level of professional service.

  • 2-4 years of analytics work experience with a University degree in Engineering, preferably from Tier-1 or Tier-2 colleges.
  • Possess the skill to creatively solve analytical problems and propose solutions.
  • Ability to perform data manipulation and data modeling with complex data using SQL/Python
  • Knowledge of statistics and experience using statistical packages for analyzing datasets (R/Python)
  • Proficiency in Microsoft Office Excel and PowerPoint.
  • Impeccable attention to detail with excellent prioritization skills
  • Effective verbal, written and interpersonal communication skills.
  • Must be a team player and able to build strong working relationships with stakeholders
  • Strong capabilities and experience with programming in Python (Numpy & Pandas)
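As a hedged sketch of the NumPy & Pandas data-manipulation skills listed above, here is a typical join-then-aggregate step; the event log, columns, and team names are invented for illustration and are not Soroco's actual schema:

```python
import pandas as pd

# Hypothetical application-usage event log and user roster
events = pd.DataFrame({
    "user": ["a", "a", "b", "b", "b"],
    "app":  ["excel", "crm", "crm", "crm", "excel"],
    "seconds": [120, 300, 60, 240, 30],
})
users = pd.DataFrame({"user": ["a", "b"], "team": ["ops", "finance"]})

# Join the two tables, then aggregate time spent per team and application
usage = (
    events.merge(users, on="user")
          .groupby(["team", "app"], as_index=False)["seconds"].sum()
)
print(usage.to_dict("records"))
```

From a table like `usage`, the actionable-insight step is usually a ranking or comparison (e.g. which team spends the most time in which application).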

Bonus Skills:   

  • Knowledge of machine learning techniques (clustering, classification, and sequencing, among others)
  • Experience with visualization tools like Tableau, PowerBI, Qlik.

How You Will Grow:

Soroco believes in supporting you and your career. We will encourage you to grow by providing professional development opportunities across multiple business functions. Joining a young company will allow you to explore what is possible and have a high impact.

Job posted by
Priyadarshini Rao

Senior Software Engineer (Architect), Data

at Uber

Founded 2012  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Hadoop
kafka
Spark
Apache Hive
Java
Python
Scala
Apache Hadoop
Apache Impala
Apache Drill
Apache Spark
tez
presto
Bengaluru (Bangalore)
7 - 15 yrs
₹0L / yr

Data Platform engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that can operate at scale. A 5a Engineer at Uber exhibits the following qualities:

 

  • Demonstrate tech expertise: demonstrate technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
  • Execute large-scale projects: define, plan, and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
  • Collaborate across teams: act as a domain resource for engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive to a consensus.
  • Coach engineers: coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to help improve the entire team.
  • Tech leadership: lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.


What You’ll Do

  • Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
  • Work alongside the pioneers of big data systems such as Hive, Yarn, Spark, Presto, Kafka, Flink to build out a highly reliable, performant, easy to use software system for Uber’s planet scale of data. 
  • Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a high-performance, large-scale service, while building these capabilities for Uber's engineers and operations folks.

 

What You’ll Need

  • 7+ years of experience building large-scale products, data platforms, and distributed systems in a high-caliber environment.
  • Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
  • Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. Advanced knowledge of at least one programming language, and happy to learn more. Our core languages are Java, Python, Go, and Scala.
  • Data Engineering: Expertise in one of the big data analytics technologies we currently use, such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, Arrow, etc. Under-the-hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon EMR, Amazon Redshift, Docker, Kubernetes, Mesos, etc.
  • Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
  • A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
  • Business acumen: You understand requirements beyond the written word. Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
Job posted by
Suvidha Chib

ML & NLP Engineer

at Artivatic.ai

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Deep Learning
Natural Language Processing (NLP)
Natural Language Toolkit (NLTK)
Java
Scala
Bengaluru (Bangalore)
3 - 7 yrs
₹6L - ₹14L / yr
About Artivatic: Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore with a 25+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access, and health benefits, using alternative data sources to increase their productivity, efficiency, automation power, and profitability, hence improving their way of doing business more intelligently & seamlessly. Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, healthcare, automated decisions, monitoring, claims processing, sentiment/psychology behaviour, auto insurance claims, travel insurance, disease prediction for insurance, and more.

Job description
We at Artivatic are seeking passionate, talented, and research-focused natural language processing engineers with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience in modeling and developing NLP tools, and experience working with machine learning/deep learning algorithms.

Roles and responsibilities
  • Developing novel algorithms and modeling techniques to advance the state of the art in Natural Language Processing.
  • Developing NLP-based tools and solutions end to end.
  • Working closely with R&D and Machine Learning engineers, implementing algorithms that power user and developer-facing products.
  • Being responsible for measuring and optimizing the quality of your algorithms.

Requirements
  • Hands-on experience building NLP models using different NLP libraries and toolkits like NLTK, Stanford NLP, etc.
  • Good understanding of rule-based, statistical, and probabilistic NLP techniques.
  • Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, Named Entity Recognition, etc.
  • Good understanding of machine learning and deep learning algorithms.
  • Good knowledge of Data Structures and Algorithms.
  • Strong programming skills in Python/Java/Scala/C/C++.
  • Strong problem-solving and logical skills.
  • A go-getter attitude with the willingness to learn new technologies.
  • Well versed in software design paradigms and good development practices.

Basic Qualifications
  • Bachelors or Masters degree in Computer Science, Mathematics, or a related field, with a specialization in Natural Language Processing, Machine Learning, or Deep Learning.
  • Publication record in conferences/journals is a plus.
  • 2+ years of working/research experience building NLP-based solutions is preferred.

If you feel that you are the ideal candidate & can bring a lot of value to our culture & company's vision, then please do apply. If your profile matches our requirements, you will hear from one of our team members. We are looking for someone who can be part of our team, not just an employee.

Job Perks
Insurance, travel compensation & others
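As a minimal, hedged illustration of the text-mining techniques mentioned (frequency-based keyword extraction being the simplest), here is a sketch using only the Python standard library; the stopword list and sample text are illustrative only:

```python
import re
from collections import Counter

def top_keywords(text, n=3,
                 stopwords=frozenset({"the", "a", "of", "and", "to", "is"})):
    """Frequency-based keyword extraction: tokenize, drop stopwords,
    return the n most frequent terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in stopwords)
    return [word for word, _ in counts.most_common(n)]

doc = ("Insurance claims need insurance text mining; insurance "
       "underwriting reads claims text automatically.")
print(top_keywords(doc))  # → ['insurance', 'claims', 'text']
```

Production NLP work would of course use the libraries named above (NLTK, Stanford NLP) for tokenization, NER, and topic modeling rather than raw regexes.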
Job posted by
Layak Singh

Data Scientist

at Public Vibe

Founded 2016  •  Product  •  20-100 employees  •  Profitable
Java
Data Science
Python
Natural Language Processing (NLP)
Scala
Hadoop
Spark
kafka
Hyderabad
1 - 3 yrs
₹1L - ₹3L / yr
Hi candidates, greetings from Publicvibe! We are hiring NLP Engineers / Data Scientists with between 0.6 and 2.5 years of experience for our Hyderabad location. If anyone is looking out for opportunities or a job change, please reach out to us. Regards, Dhaneesha Dominic.
Job posted by
Dhaneesha Dominic