Statistical semantics jobs

11+ Statistical semantics Jobs in India

Apply to 11+ Statistical semantics Jobs on CutShort.io. Find your next job, effortlessly. Browse Statistical semantics Jobs and apply today!

Marktine


1 recruiter
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
2 - 10 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
RNN
DNN
Data modeling
Data Visualization
+2 more
What you will do:
- Understand the business drivers and analytical use cases.
- Translate use cases into data models and into descriptive, analytical, predictive, and engineering outcomes.
- Explore new technologies and learn new techniques to solve business problems creatively.
- Think big and drive the strategy for better data quality for customers.
- Be the voice of the business within engineering, and of engineering within the business and with customers.
- Collaborate with engineering and business teams to build better data products and services.
- Deliver projects collaboratively with the team and keep customers updated on time.

What we're looking for:
- Hands-on experience in data modeling, data visualization, and pipeline design and development
- Hands-on exposure to machine learning concepts such as supervised learning, unsupervised learning, RNNs, and DNNs (a small illustrative sketch follows this list)
- Prior experience working with business stakeholders in an enterprise setting is a plus
- Great communication skills: you should be able to communicate directly with senior business leaders, embed yourself with business teams, and present solutions to business stakeholders
- Experience working independently and driving projects end to end; strong analytical skills.
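
As a rough illustration of the RNN/DNN exposure mentioned above, here is a minimal sketch in Keras (the posting does not name a framework; Keras, the layer sizes, and the toy data shapes are assumptions chosen only for this example):

```python
# Minimal illustrative sketch of an RNN + dense (DNN) binary classifier in Keras.
# The toy data (200 samples, 20 timesteps, 8 features) is a made-up assumption.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(200, 20, 8).astype("float32")
y = np.random.randint(0, 2, size=(200,)).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(20, 8)),
    layers.SimpleRNN(32),                   # recurrent layer (RNN)
    layers.Dense(16, activation="relu"),    # dense hidden layer (DNN)
    layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

The same structure extends naturally to LSTM/GRU layers and real feature pipelines.
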
Read more
Bengaluru (Bangalore)
6 - 15 yrs
₹40L - ₹90L / yr
Data Science
Deep Learning
Data Scientist
Machine Learning (ML)
Artificial Neural Network (ANN)
+9 more

Responsibilities

  • Build out and manage a young data science vertical within the organization

  • Provide technical leadership in the areas of machine learning, analytics, and data sciences

  • Work with the team to create a roadmap that addresses the company’s requirements: identify business problems that can be solved using data science, scope them end to end, and solve the underlying data-mining, analytics, and ML problems.

  • Solve business problems by applying advanced Machine Learning algorithms and complex statistical models on large volumes of data.

  • Develop heuristics, algorithms, and models to deanonymize entities on public blockchains

  • Data Mining - Extend the organization’s proprietary dataset by introducing new data collection methods and by identifying new data sources.

  • Keep track of the latest trends in cryptocurrency usage on the open web and dark web, and develop countermeasures to defeat concealment techniques used by criminal actors.

  • Develop in-house algorithms to generate risk scores for blockchain transactions (a small illustrative sketch follows this list).

  • Work with data engineers to implement the results of your work.

  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Build, scale and deploy holistic data science products after successful prototyping.

  • Clearly articulate and present recommendations to business partners, and influence future plans based on insights.
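
For illustration only, a hedged sketch of what a simple heuristic transaction risk score could look like; the feature names, weights, and thresholds below are assumptions invented for the example, not the company's actual algorithm:

```python
# Hypothetical heuristic risk score for a blockchain transaction.
# All features and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Txn:
    amount_usd: float
    counterparty_flagged: bool   # counterparty appears on a watchlist
    via_mixer: bool              # funds passed through a known mixing service
    hops_from_exchange: int      # hops from the nearest KYC'd exchange address

def risk_score(t: Txn) -> float:
    """Return a heuristic 0-1 risk score; higher means riskier."""
    score = 0.0
    score += 0.4 if t.counterparty_flagged else 0.0
    score += 0.3 if t.via_mixer else 0.0
    score += min(t.hops_from_exchange, 5) * 0.04   # harder to trace with more hops
    score += 0.1 if t.amount_usd > 10_000 else 0.0
    return min(score, 1.0)

print(risk_score(Txn(25_000, True, False, 3)))   # ~0.62 on this made-up example
```

In practice such heuristics would be combined with or replaced by trained models, as the responsibilities above imply.
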

 

Preferred Experience

 

  • 8+ years of relevant experience as a Data Scientist or Analyst. A few years of work experience solving NLP or other ML problems is a plus

  • Must have previously managed a team of at least 5 data scientists or analysts, or demonstrate prior experience in scaling a data science function from the ground up

  • Good understanding of Python, Bash scripting, and basic cloud platform skills (on GCP or AWS)

  • Excellent communication and analytical skills

What you’ll get

  • Work closely with the Founders in helping grow the organization to the next level alongside some of the best and brightest talents around you

  • An excellent culture, we encourage collaboration, growth, and learning amongst the team

  • Competitive salary and equity

  • An autonomous and flexible role where you will be trusted with key tasks.

  • An opportunity to have a real impact and be part of a company with purpose.

Read more
Personal Care Product Manufacturing
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

DATA ENGINEER


Overview

They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

 

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.
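
As a rough, hypothetical sketch of the kind of pipeline this role describes (Spark appears in the skills list above; the paths, columns, and bucket names below are assumptions, not the company's actual setup), a minimal PySpark ETL job might look like this:

```python
# Illustrative PySpark ETL sketch: extract raw JSON, clean it, load Parquet.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw events from a hypothetical source location.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: fix types, drop invalid rows, derive a partition column.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for analysts and data scientists downstream.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```
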


Responsibilities:

Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.

Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.

Optimize and tune the performance of data systems to ensure efficient data processing and analysis.

Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.

Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.

Implement and maintain data governance and security measures to protect sensitive data.

Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.

Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.


Qualifications:

Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.

Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.

Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).

Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).

Solid understanding of data modeling, data warehousing, and ETL principles.

Knowledge of data integration techniques and tools (e.g., Apache NiFi, Talend, or Informatica).

Strong problem-solving and analytical skills, with the ability to handle complex data challenges.

Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

Advanced knowledge of distributed computing and parallel processing.

Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).

Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).

Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

Experience with data visualization and reporting tools (e.g., Tableau, Power BI).

Certification in relevant technologies or data engineering disciplines.



Read more
fintech startup
Agency job
via Qrata by Rayal Rajan
Pune
4 - 12 yrs
₹15L - ₹45L / yr
Python
Linear regression
Logistic regression
Machine Learning (ML)
Algorithms

The role is with a fintech credit card company based in Pune (OneCard), within the Decision Science team.


About


Credit cards haven't changed much for over half a century so our team of seasoned bankers, technologists, and designers set out to redefine the credit card for you - the consumer. The result is OneCard - a credit card reimagined for the mobile generation. OneCard is India's best metal credit card built with full-stack tech. It is backed by the principles of simplicity, transparency, and giving back control to the user.



The Engineering Challenge


“Re-imagining credit and payments from First Principles”


Payments is an interesting engineering challenge in itself with requirements of low latency, transactional guarantees, security, and high scalability. When we add credit and engagement into the mix, the challenge becomes even more interesting with underwriting and recommendation algorithms working on large data sets. We have eliminated the current call center, sales agent, and SMS-based processes with a mobile app that puts the customers in complete control. To stay agile, the entire stack is built on the cloud with modern technologies.


Purpose of Role:


- Develop and implement the collection analytics and strategy function for credit cards. Use analysis and customer insights to develop the optimum strategy.


CANDIDATE PROFILE:


- Successful candidates will have in-depth knowledge of statistical modelling and data analysis tools and techniques (Python, R, etc.). They will be adept communicators with good interpersonal skills, able to work with senior stakeholders in India to grow revenue, primarily by identifying, delivering, and creating new, profitable analytics solutions.


We are looking for someone who:


- Has a proven track record in collections and risk analytics, preferably in the Indian BFSI industry. This is a must.


- Can identify and deliver appropriate analytics solutions


- Is experienced in analytics team management



Essential Duties and Responsibilities:


- Responsible for delivering high-quality analytical and value-added services


- Responsible for automating insights and taking proactive actions on them to mitigate collection risk.


- Work closely with the internal team members to deliver the solution


- Engage business/technical consultants and delivery teams appropriately so that there is a shared understanding and agreement on how to deliver the proposed solution


- Use analysis and customer insights to develop value propositions for customers


- Maintain and enhance the suite of suitable analytics products.


- Actively seek to share knowledge within the team


- Share findings with peers from other teams and management where required


- Actively contribute to setting best practice processes.


Knowledge, Experience and Qualifications:


Knowledge:


- Good understanding of collection analytics, preferably in the retail lending industry.


- Knowledge of statistical modelling/data analysis tools (Python, R, etc.), techniques, and market trends


- Knowledge of different modelling frameworks such as linear regression, logistic regression, multiple regression, logit, probit, time-series modelling, CHAID, CART, etc. (a minimal logistic-regression sketch follows this section)


- Knowledge of machine learning and AI algorithms such as gradient boosting, KNN, etc.


- Understanding of decisioning and portfolio management in banking and financial services would be an added advantage


- Understanding of credit bureau data would be an added advantage
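
For illustration, a minimal sketch of a logistic-regression collection-risk model of the kind listed above, written with scikit-learn (the library, the synthetic features, and the data-generating process are assumptions made up for this example):

```python
# Hedged example: logistic regression on synthetic "collection risk" data.
# Feature names and coefficients are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.integers(0, 90, n),    # days past due
    rng.uniform(0, 1, n),      # credit-limit utilisation
    rng.integers(0, 10, n),    # missed payments in the last 12 months
])
# Synthetic target: does the account roll into a worse delinquency bucket?
p = 1 / (1 + np.exp(-(0.03 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2] - 3)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("holdout AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```
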


Experience:


- 4 to 8 years of work experience in the core analytics function of a large bank or consulting firm.


- Experience working on collection analytics is a must


- Experience handling large data volumes using data analysis tools and generating good data insights


- Demonstrated ability to communicate ideas and analysis results effectively both verbally and in writing to technical and non-technical audiences


- Excellent communication, presentation, and writing skills; strong interpersonal skills


- Motivated to meet and exceed stretch targets


- Ability to make the right judgments in the face of complexity and uncertainty


- Excellent relationship and networking skills across our different businesses and geographies


Qualifications:


- Master's degree in Statistics, Mathematics, Economics, Business Management, or Engineering from a reputed college

Read more
ZF India
Posted by Sagar Sthawarmath
Hyderabad, Chennai
4 - 9 yrs
₹3L - ₹15L / yr
SAS
Data Analytics
Data Visualization
Data integration
Data Warehouse (DWH)

In this role, you will: 

As part of a team focused on preserving the customer experience across the organization, this Analytic Consultant will be responsible for: 

    • Understanding business objectives and providing credible challenge to analysis requirements. 
    • Verifying that sound analysis practices and data decisions were leveraged throughout the planning and data-sourcing phases. 
    • Conducting in-depth research within complex data environments to identify data integrity issues and propose solutions that improve analysis accuracy. 
    • Applying critical evaluation to challenge assumptions, formulate a defensible hypothesis, and ensure high-quality analysis results. 
    • Ensuring adherence to data management/data governance regulations and policies. 
    • Performing and testing highly complex data analytics for customer remediation (a small illustrative check follows this list). 
    • Designing analysis project flows and documentation that are structured for consistency, easy to understand, and suitable for multiple levels of reviewers, partners, and regulatory agents, demonstrating the research and analysis completed. 
    • Investigating and ensuring data integrity from multiple sources. 
    • Ensuring the data recommended and used is the best “source of truth”. 
    • Applying knowledge of the business, customers, and products to synthesize data, 'form a story', and align information to contrast/compare with the industry perspective. The data involved is typically very large, structured or unstructured, and from multiple sources. 
    • Maintaining strong attention to detail and consistently meeting high quality standards. 
    • Other duties as assigned by the manager. 
    • Assisting with high-priority work outside of regular business hours or on weekends as needed. 
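
As a small, hypothetical illustration of the data-integrity checks described above (pandas is used here only for illustration; the column names and checks are assumptions, not this team's process):

```python
# Illustrative data-quality check on a hypothetical accounts table.
import pandas as pd

accounts = pd.DataFrame({
    "account_id": [1, 2, 2, 4],
    "balance": [100.0, None, 250.0, -5.0],
})

issues = {
    "duplicate_ids": int(accounts["account_id"].duplicated().sum()),
    "missing_balance": int(accounts["balance"].isna().sum()),
    "negative_balance": int((accounts["balance"] < 0).sum()),
}
print(issues)  # flags to investigate before the analysis is signed off
```
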


Essential Qualifications: 
 

    • 5+ years in similar analytics roles 
    • Bachelor's, M.A./M.Sc., or higher degree in applied mathematics, statistics, engineering, physics, accounting, finance, economics, econometrics, computer science, or business/social and behavioral sciences with a quantitative emphasis. 
    • Preferred programming knowledge: SQL/SAS 
    • Knowledge of PVSI, Non-Lending, Student Loans, Small Business and Personal Lines and Loans is a plus. 
    • Strong experience with data integration, database structures and data warehouses. 
    • Persuasive written and verbal communication skills. 


Desired Qualifications: 

    • Certifications in Data Science, or BI Reporting tools. 
    • Ability to prioritize work, meet deadlines, achieve goals, and work under pressure in a dynamic and complex environment (soft skills). 
    • Detail-oriented, results-driven, and able to navigate a quickly changing, high-demand environment while balancing multiple priorities. 
    • Ability to research and report on a variety of issues using problem solving skills. 
    • Ability to act with integrity and a high level of professionalism with all levels of team members and management. 
    • Ability to make timely and independent judgment decisions while working in a fast-paced and results-driven environment. 
    • Ability to learn the business aspects quickly, multitask and prioritize between projects. 
    • Exhibits appropriate sense of urgency in managing responsibilities. 
    • Ability to accurately process high volumes of work within established deadlines. 
    • Available to flex schedule periodically based on business need. 
    • Demonstrate strong negotiation, communication & presentation skills. 
    • Demonstrates a high degree of reliability, integrity and trustworthiness. 
    • Takes ownership of assignments and helps drive assignments of the team. 
    • Dedicated, enthusiastic, driven, and performance-oriented; possesses a strong work ethic and is a good team player. 
    • Be proactive and get engaged in organizational initiatives.
Read more
KARZA
Agency job
via Seven N Half by Viral Jain
Remote only
2 - 5 yrs
₹6L - ₹19L / yr
RNN
Deep Learning
Machine Learning (ML)
Sentiment Analysis
LSTM
+2 more
NLP ENGINEER at KARZA TECHNOLOGIES
● Identify and integrate new datasets that can be leveraged through our product capabilities and work closely
with the engineering team to strategize and execute the development of data products
● Execute analytical experiments methodically to help solve various problems and make a true impact across
various domains and industries
● Identify relevant data sources and sets to mine for client business needs, and collect large structured and
unstructured datasets and variables
● Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve
models, and clean and validate data for uniformity and accuracy
● Analyze data for trends and patterns, and Interpret data with a clear objective in mind
● Implement analytical models into production by collaborating with software developers and machine
learning engineers
● Communicate analytic solutions to stakeholders and implement improvements as needed to operational
systems
What you need to work with us:
● Good understanding of data structures, algorithms, and the first principles of mathematics.
● Proficient in Python and in using packages like NLTK, NumPy, and pandas
● Should have worked with deep learning frameworks (e.g. TensorFlow, Keras, PyTorch)
● Hands-on experience in Natural Language Processing and sequence/RNN-based models (a small illustrative example follows this list)
● Mathematical intuition of ML and DL algorithms
● Should be able to perform thorough model evaluation by creating hypotheses on the basis of statistical
analyses
● Should be comfortable in going through open-source code and reading research papers.
● Should be curious or thoughtful enough to answer the “WHYs” pertaining to the most cherished
observations, thumb rules, and ideas across the data science community.
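
For illustration, a minimal NLTK sketch of the kind of NLP/sentiment work the role touches on; the sample sentences are made up, and a production system would use the trained models described above rather than a rule-based scorer:

```python
# Hedged example: rule-based sentiment scoring with NLTK's VADER.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

docs = [
    "The onboarding process was quick and painless.",
    "Support never replied and the dashboard keeps crashing.",
]
for doc in docs:
    print(round(sia.polarity_scores(doc)["compound"], 3), doc)
```
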
Qualification and Experience Required:
● 1 - 4 years of relevant experience
● Bachelor's/Master's degree in Computer Science, Computer Engineering, or Information Technology
Read more
Wellness Forever Medicare Private Limited
Mumbai
3 - 5 yrs
₹7L - ₹11L / yr
Data Warehouse (DWH)
Informatica
ETL
SQL server
Microsoft Windows Azure
+4 more
  • Minimum 3-4 years of experience with ETL tools, SQL, SSAS & SSIS
  • Good understanding of Data Governance, including Master Data Management (MDM) and Data Quality tools and processes
  • Knowledge of languages such as JSON, Python, and R
  • Hands-on experience with SQL database design
  • Experience working with REST APIs
  • Influencing and supporting project delivery through involvement in project/sprint planning and QA
  • Working experience with Azure
  • Stakeholder management
  • Good communication skills
Read more
Wissen Technology


4 recruiters
Posted by Lokesh Manikappa
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data modeling
Spark
+5 more

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have: experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development, and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing, and maintaining data processing using Python, DB2, Greenplum, Autosys, and other technologies

 

Skills/Expertise Required:

Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience in writing stored procedures, integrating database processing, and tuning and optimizing database queries.

Strong knowledge of table partitions, high-performance loading and data processing.
Good to have hands-on experience working with Perl or Python.
Hands-on development using the Spark/KDB/Greenplum platforms will be a strong plus.
Designing, developing, maintaining and supporting Data Extract, Transform and Load (ETL) software using Informatica, Shell Scripts, DB2 UDB and Autosys.
Coming up with system architecture/re-design proposals for greater efficiency and ease of maintenance and developing software to turn proposals into implementations.

Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills

Read more
Pinghala


1 recruiter
Posted by Ashwini Dhaipule
Pune
3 - 5 yrs
₹6L - ₹10L / yr
PowerBI
Data Visualization
Data architecture
Informatica PowerCenter
SQL
+5 more

Pingahla is recruiting Business Intelligence Consultants/Senior Consultants who can help us with Information Management projects (domestic, onshore, and offshore) as developers and team leads. Candidates are expected to have 3-6 years of experience with Informatica PowerCenter, Talend DI, or Informatica Cloud, and must be very proficient with Business Intelligence in general. The job is based out of our Pune office.

Responsibilities:

  • Manage the customer relationship by serving as the single point of contact before, during and after engagements.
  • Architect data management solutions.
  • Provide technical leadership to other consultants and/or customer/partner resources.
  • Design, develop, test and deploy data integration solutions in accordance with customer’s schedule.
  • Supervise and mentor all intermediate and junior level team members.
  • Provide regular reports to communicate status both internally and externally.

Qualifications:

A typical profile that would suit this position would have the following background:
  • A graduate from a reputed engineering college
  • Excellent IQ and analytical skills; able to grasp new concepts and learn new technologies.
  • A willingness to work with a small team in a fast-growing environment.
  • A good knowledge of Business Intelligence concepts

 

Mandatory Requirements:

  • Knowledge of Business Intelligence
  • Good knowledge of at least one of the following data integration tools - Informatica PowerCenter, Talend DI, Informatica Cloud
  • Knowledge of SQL
  • Excellent English and communication skills
  • Intelligent, quick to learn new technologies
  • Track record of accomplishment and effectiveness with handling customers and managing complex data management needs
     

 

Read more
Bengaluru (Bangalore)
1 - 2 yrs
₹15L - ₹17L / yr
Machine Learning (ML)
Data Science
Data Scientist
Python
pandas
+4 more
  • Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
  • Implement data pipelines, new features, and algorithms that are critical to our production models
  • Create scalable strategies to deploy and execute your models
  • Write well designed, testable, efficient code
  • Identify valuable data sources and automate collection processes.
  • Undertake preprocessing of structured and unstructured data.
  • Analyze large amounts of information to discover trends and patterns.

Requirements:

  • 1+ years of experience in applied data science or engineering with a focus on machine learning
  • Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks, e.g. pandas, scikit-learn, XGBoost, LightGBM, logistic regression, random forest classifiers, gradient boosting regressors, etc. (a small illustrative sketch follows this list)
  • Strong quantitative and programming skills with a product-driven sensibility
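
For illustration, a minimal sketch of the kind of model training the role calls for, using scikit-learn; the synthetic features stand in for real credit/fraud data and are assumptions for the example:

```python
# Hedged example: gradient boosting on synthetic data with scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))   # stand-ins for transaction/bureau features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
print("cv AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```
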

 

Read more
GitHub


4 recruiters
Posted by Nataliia Mediana
Remote only
3 - 8 yrs
$24K - $60K / yr
ETL
PySpark
Data engineering
Data engineer
Athena
+9 more
We are a nascent quant hedge fund; we need to stage financial data and make it easy to run and re-run various preprocessing and ML jobs on the data.
- We are looking for an experienced data engineer to join our team.
- The preprocessing involves ETL tasks using PySpark and AWS Glue, staging data in Parquet format on S3, and querying it with Athena.

To succeed in this data engineering position, you should care about well-documented, testable code and data integrity. We have DevOps engineers who can help with AWS permissions.
We would like to build up a consistent data lake with staged, ready-to-use data, and to build up various scripts that will serve as blueprints for various additional data ingestion and transforms.

If you enjoy setting up something which many others will rely on, and have the relevant ETL expertise, we’d like to work with you.
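
As a hedged sketch of the staging workflow described above, here is one way to write Parquet to S3 and read it back through Athena using the awswrangler library (awswrangler is one possible tool, not necessarily this team's choice; the bucket, database, and table names are assumptions, and AWS credentials plus an existing Glue database are presumed):

```python
# Illustrative example: stage a DataFrame as Parquet on S3 and query it via Athena.
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"ticker": ["AAA", "BBB"], "close": [10.5, 20.1]})

# Stage as a Parquet dataset and register it in the Glue catalog (hypothetical names).
wr.s3.to_parquet(
    df=df,
    path="s3://example-quant-lake/prices/",
    dataset=True,
    database="example_db",
    table="prices",
    mode="overwrite",
)

# Re-read the staged data through Athena for downstream preprocessing/ML jobs.
out = wr.athena.read_sql_query("SELECT * FROM prices", database="example_db")
print(out.head())
```
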

Responsibilities
- Analyze and organize raw data
- Build data pipelines
- Prepare data for predictive modeling
- Explore ways to enhance data quality and reliability
- Potentially, collaborate with data scientists to support various experiments

Requirements
- Previous experience as a data engineer with the above technologies
Read more
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.