Senior Machine Learning Engineer (NLP specialization)

Posted by Alice Preetika
3 - 6 yrs
₹12L - ₹15L / yr
Hyderabad
Skills
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python

Key Responsibilities:

  • Design NLP applications
  • Select appropriate annotated datasets for Supervised Learning methods
  • Use effective text representations to transform natural language into useful features
  • Find and implement the right algorithms and tools for NLP tasks
  • Develop NLP systems according to requirements
  • Train the developed model and run evaluation experiments
  • Perform statistical analysis of results and refine models
  • Extend ML libraries and frameworks to apply in NLP tasks
  • Stay up to date with the rapidly changing field of machine learning

 

What we are looking for:

  • Proven experience as an NLP Engineer or similar role
  • Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modeling
  • Ability to effectively design software architecture
  • Deep understanding of text representation techniques (such as n-grams and bag-of-words), sentiment analysis, statistics, and classification algorithms (a brief sketch follows this list)
  • Knowledge of Python, Java, and R
  • Ability to write robust and testable code
  • Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
  • Strong communication skills
  • An analytical mind with problem-solving abilities
  • Degree in Computer Science, Mathematics, Computational Linguistics, or similar field
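For illustration, here is a minimal sketch of the text-representation and classification work described above, assuming scikit-learn; the toy texts and labels are invented purely for the example.

```python
# Minimal sketch: turning raw text into n-gram features and training a
# classifier. Assumes scikit-learn is installed; the toy dataset and labels
# are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts = [
    "the delivery was quick and the product works great",
    "terrible support, my issue was never resolved",
    "excellent quality, will buy again",
    "the item arrived broken and refunds take forever",
]
labels = ["positive", "negative", "positive", "negative"]

# Bag-of-words / n-gram representation (unigrams + bigrams) feeding a
# linear classifier, wrapped in a single pipeline.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```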

About Service Pack

Founded: 2021
Stage: Raised funding

Omni-Channel CX Automation Suite powered by AI. Our vision is to transform Customer Experience using Artificial Intelligence.


Similar jobs

Semi Stealth Mode startup in Delhi
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 6 yrs
₹35L - ₹40L / yr
Data Analytics
Python
Data Visualization
SQL

A Delhi NCR based Applied AI & Consumer Tech company tackling one of the largest unsolved consumer internet problems of our time. We are a motley crew of smart, passionate and nice people who believe you can build a high performing company with a culture of respect aka a sports team with a heart aka a caring meritocracy.

Our illustrious angels include unicorn founders, serial entrepreneurs with exits, tech & consumer industry stalwarts and investment professionals/bankers.

We are hiring for our founding team (in Delhi NCR only, no remote) that will take the product from prototype to a landing! Opportunity for disproportionate non-linear impact, learning and wealth creation in a classic 0-1 with a Silicon Valley caliber founding team.


Key Responsibilities:

1. Data Strategy and Vision:
  • Develop and drive the company's data analytics strategy, aligning it with overall business goals.
  • Define the vision for data analytics, outlining clear objectives and key results (OKRs) to measure success.

2. Data Analysis and Interpretation:
  • Oversee the analysis of complex datasets to extract valuable insights, trends, and patterns.
  • Utilize statistical methods and data visualization techniques to present findings in a clear and compelling manner to both technical and non-technical stakeholders.

3. Data Infrastructure and Tools:
  • Evaluate, select, and implement advanced analytics tools and platforms to enhance data processing and analysis capabilities.
  • Collaborate with IT teams to ensure a robust and scalable data infrastructure, including data storage, retrieval, and security protocols.

4. Collaboration and Stakeholder Management:
  • Collaborate cross-functionally with teams such as marketing, sales, and product development to identify opportunities for data-driven optimizations.
  • Act as a liaison between technical and non-technical teams, ensuring effective communication of data insights and recommendations.

5. Performance Measurement:
  • Establish key performance indicators (KPIs) and metrics to measure the impact of data analytics initiatives on business outcomes.
  • Continuously assess and improve the accuracy and relevance of analytical models and methodologies.


Qualifications:

  • Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or related field.
  • Proven experience (5+ years) in data analytics, with a focus on leading analytics teams and driving strategic initiatives.
  • Proficiency in data analysis tools such as Python, R, SQL, and advanced knowledge of data visualization tools (a brief sketch follows this list).
  • Strong understanding of statistical methods, machine learning algorithms, and predictive modelling techniques.
  • Excellent communication skills, both written and verbal, to effectively convey complex findings to diverse audiences.
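As a rough illustration of the analysis and visualization work listed above, the sketch below computes and plots a hypothetical KPI with pandas and matplotlib; the dataset, column names and KPI definition are assumptions, not part of the role description.

```python
# Illustrative sketch of a small analysis-and-visualization task. Assumes
# pandas and matplotlib; the data, column names and KPI are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical daily signups / conversions data.
df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=90, freq="D"),
    "signups": range(100, 190),
    "conversions": [int(x * 0.3) for x in range(100, 190)],
})

# KPI: rolling 7-day conversion rate.
df["conversion_rate"] = df["conversions"] / df["signups"]
df["conversion_rate_7d"] = df["conversion_rate"].rolling(7).mean()

# Quick descriptive statistics for stakeholders.
print(df[["signups", "conversions", "conversion_rate"]].describe())

df.plot(x="date", y="conversion_rate_7d", title="7-day rolling conversion rate")
plt.tight_layout()
plt.savefig("conversion_rate_trend.png")
```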
India's best Short Video App
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
+26 more
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js, Express.js, or Python
3-4 years of experience with big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc. (a brief sketch follows this section)
3-4 years of experience building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, ElastiCache (Redis + Memcached)
Experience with designing and building high-scale app backends and microservices leveraging cloud-native services on AWS like proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring and telemetry
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, and fault-tolerance
Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend microservices
5-7 years of strong design/development experience in building massively large-scale, high-throughput, low-latency distributed internet systems and products
Good experience working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper, and NoSQL systems
Agile methodologies, sprint management, roadmaps, mentoring, documenting, software architecture
Liaison with Product Management, DevOps, QA, Client and other teams
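The sketch referenced above is a minimal Spark Structured Streaming job consuming from Kafka; the broker address, topic name and event schema are assumptions for illustration only.

```python
# Minimal sketch of a Spark Structured Streaming job reading from Kafka.
# Broker address, topic name and event schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("video-events-stream").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
       .option("subscribe", "video-events")                  # assumed topic
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Per-minute event counts by type, written to the console sink for illustration.
counts = (events
          .withWatermark("event_time", "5 minutes")
          .groupBy(window(col("event_time"), "1 minute"), col("event_type"))
          .count())

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```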
 
Your Experience Across The Years in the Roles You’ve Played
 
Have a total of 5-7 years of experience, with 2-3 years in a startup.
Have a B.Tech or M.Tech or equivalent academic qualification from a premier institute.
Experience in product companies working on internet-scale applications is preferred.
Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
Follow the Cloud Native Computing Foundation landscape, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Information Solution Provider Company
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 7 yrs
₹10L - ₹15L / yr
Spark
Scala
Hadoop
Big Data
Data engineering
+2 more

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform (a brief sketch follows this list).
  • Driving optimization, testing and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers to the business needs and aligns to the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
  • Understanding various data security standards and using data security tools to apply and adhere to the required data controls for user access in the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with the data science and business analytics teams to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.
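As referenced in the first responsibility, here is a minimal sketch of a batch pipeline on a Hadoop/Spark platform; the HDFS paths, column names and cleansing rules are hypothetical.

```python
# Minimal sketch of a batch data pipeline on a Hadoop/Spark platform.
# HDFS paths, column names and cleansing rules are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest raw CSV landed on HDFS (path is hypothetical).
orders = spark.read.option("header", True).csv("hdfs:///data/raw/orders/")

# Basic cleansing and enrichment.
cleaned = (orders
           .dropDuplicates(["order_id"])
           .filter(F.col("amount").cast("double") > 0)
           .withColumn("order_date", F.to_date("order_ts")))

# Write a partitioned, query-friendly copy for downstream consumers.
(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("hdfs:///data/curated/orders/"))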

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expert in building and optimizing ‘big data’ data/ML pipelines, architectures and data sets.
  • Knowledge of modelling unstructured data into structured data designs.
  • Experience in Big Data access and storage techniques.
  • Experience in doing cost estimation based on the design and development.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.
  • Highly organized, self-motivated, proactive, with the ability to propose the best design solutions.
  • Good time management and multitasking skills to work to deadlines, both independently and as part of a team.

 

Ganit Business Solutions
Posted by Viswanath Subramanian
Remote, Chennai, Bengaluru (Bangalore), Mumbai
3 - 7 yrs
₹12L - ₹25L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
R Programming
+5 more

Ganit has flipped the data science value chain: we do not start with a technique; for us, consumption comes first. With this philosophy, we have successfully scaled from a small start-up to a 200-person company with clients in the US, Singapore, Africa, UAE, and India.

We are looking for experienced data enthusiasts who can make the data talk to them. 

 

You will: 

  • Understand business problems and translate business requirements into technical requirements. 
  • Conduct complex data analysis to ensure data quality & reliability i.e., make the data talk by extracting, preparing, and transforming it. 
  • Identify, develop and implement statistical techniques and algorithms to address business challenges and add value to the organization. 
  • Gather requirements and communicate findings in the form of a meaningful story with the stakeholders  
  • Build & implement data models using predictive modelling techniques. Interact with clients and provide support for queries and delivery adoption. 
  • Lead and mentor data analysts. 

 

We are looking for someone who has: 

 

  • Apart from your love for data and the ability to code even while sleeping, you would need the following.
  • Minimum of 2 years of experience in designing and delivery of data science solutions.
  • You should have successful retail/BFSI/FMCG/Manufacturing/QSR projects in your kitty to show off.
  • Deep understanding of various statistical techniques, mathematical models, and algorithms to start the conversation with the data in hand.
  • Ability to choose the right model for the data and translate that into code using R, Python, VBA, SQL, etc. (a brief sketch follows the skillset list below)
  • Bachelor's/Master's degree in Engineering/Technology, or an MBA from a Tier-1 B-school, or an M.Sc. in Statistics or Mathematics

Skillset Required:

  • Regression
  • Classification
  • Predictive Modelling
  • Prescriptive Modelling
  • Python
  • R
  • Descriptive Modelling
  • Time Series
  • Clustering
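The brief sketch referenced earlier covers the clustering item above: k-means with scikit-learn on synthetic data. The dataset and the choice of k are illustrative only.

```python
# Brief sketch: k-means clustering on synthetic data with scikit-learn.
# The data and the choice of k are illustrative assumptions.
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)
X = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# Silhouette score as a quick sanity check on cluster quality.
print("silhouette:", round(silhouette_score(X, labels), 3))
```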

What is in it for you: 

 

  • Be a part of building the biggest brand in Data science. 
  • An opportunity to be a part of a young and energetic team with a strong pedigree. 
  • Work on awesome projects across industries and learn from the best in the industry, while growing at a hyper rate. 

 

Please Note:  

 

At Ganit, we are looking for people who love problem solving. You are encouraged to apply even if your experience does not precisely match the job description above. Your passion and skills will stand out and set you apart—especially if your career has taken some extraordinary twists and turns over the years. We welcome diverse perspectives, people who think rigorously and are not afraid to challenge assumptions in a problem. Join us and punch above your weight! 

Ganit is an equal opportunity employer and is committed to providing a work environment that is free from harassment and discrimination. 

All recruitment, selection procedures and decisions will reflect Ganit’s commitment to providing equal opportunity. All potential candidates will be assessed according to their skills, knowledge, qualifications, and capabilities. No regard will be given to factors such as age, gender, marital status, race, religion, physical impairment, or political opinions. 

Synapsica Technologies Pvt Ltd
Posted by Human Resources
Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹20L / yr
Computer Vision
Image Processing
Deep Learning
Machine Learning (ML)

Introduction

Synapsica is a growth-stage HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective, while being affordable. Every patient has the right to know exactly what is happening in their body, and should not have to rely on cryptic two-liners given to them as a diagnosis. Towards this aim, we are building an artificial intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by Y Combinator and other investors from India, the US and Japan. We are proud to have GE, AIIMS, and Spinal Kinetics as our partners.

 

Your Roles and Responsibilities

The role involves computer vision tasks including development, customization and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and would involve going through and implementing existing research papers, deep-dive problem analysis, generating new ideas, and automating and optimizing key processes.

 

 

Requirements:

  • Strong problem-solving ability
  • Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, Caffe (or similar deep learning frameworks)
  • Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection, etc.
  • Ability to write custom Convolutional Neural Network architectures in PyTorch (or similar); a brief sketch follows this list
  • Experience with GPU/DSP/other multi-core architecture programming
  • Effective communication with other project members and project stakeholders
  • Detail-oriented, eager to learn and acquire new skills
  • Prior project management and team leadership experience
  • Ability to plan work and meet deadlines
  • End-to-end deployment of deep learning models
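The sketch referenced above: a small custom CNN in PyTorch with a dummy forward/backward pass. Input size, channel counts and the number of classes are illustrative assumptions.

```python
# Minimal sketch of a custom CNN in PyTorch. Input size, channel counts and
# number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input halved twice by pooling -> 16x16 feature maps with 32 channels.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One dummy forward/backward pass on a batch of 64x64 single-channel images.
model = SmallCNN()
images = torch.randn(8, 1, 64, 64)
targets = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(images), targets)
loss.backward()
print("loss:", loss.item())
```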
IDfy
Posted by Stuti Srivastava
Mumbai
3 - 7 yrs
₹15L - ₹35L / yr
Computer Vision
Natural Language Processing (NLP)
Optical Character Recognition (OCR)
Machine Learning (ML)

About IDfy

IDfy has been ranked amongst the world's Top 100 Regulatory Technology companies for the last two years. IDfy's AI-powered technology solutions help real people unlock real opportunities. We create the confidence required for people and businesses to engage with each other in the digital world. If you have used any major payment wallets, digitally opened a bank account, used a self-drive car, played a real-money online game, or hosted people through Airbnb, it's quite likely that your identity has been verified through IDfy at some point.

 

About the team

  • The machine learning team is a closely knit team responsible for building models and services that support key workflows for IDfy.
  • Our models are critical for these workflows and as such are expected to perform accurately and with low latency. We use a mix of conventional and hand-crafted deep learning models.
  • The team comes from diverse backgrounds and experience. We respect opinions and believe in honest, open communication.
  • We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform and not a services company.

 

About the role

In this role you will:

  • Work on all aspects of a production machine learning platform: acquiring data, training and building models, deploying models, building API services for exposing these models, maintaining them in production, and more (see the sketch after this list)
  • Work on performance tuning of models
  • From time to time, work on support and debugging of these production systems
  • Work on researching the latest technology in the areas of our interest and applying it to build newer products and enhance the existing platform
  • Build workflows for training and production systems
  • Contribute to documentation
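The sketch referenced in the first bullet: exposing a trained model behind a simple HTTP endpoint. Flask and joblib are assumptions (the listing names no specific framework), and the model path is hypothetical.

```python
# Minimal sketch of exposing a trained model behind an HTTP API. Flask and
# joblib are assumptions; the model path and payload format are hypothetical.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical serialized model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload["features"]          # expected: list of feature vectors
    preds = model.predict(features).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```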

 

While the emphasis will be on researching, building and deploying models into production, you will be expected to contribute to aspects mentioned above.

 

About you

You are a seasoned machine learning engineer (or data scientist). Our ideal candidate is someone with 5+ years of experience in production machine learning.

 

Must Haves

  • You should be experienced in framing and solving complex problems with the application of machine learning or deep learning models.
  • Deep expertise in computer vision or NLP, with experience putting it into production at scale.
  • You have experienced, and understand, that modelling is only a small part of building and delivering AI solutions, and you know what it takes to keep a high-performance system up and running.
  • Managing a large scale production ML system for at least a couple of years
  • Optimization and tuning of models for deployment at scale
  • Monitoring and debugging of production ML systems
  • An enthusiasm and drive to learn, assimilate and disseminate the state of the art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
  • Past experience of mentoring junior colleagues
  • Knowledge of and experience in ML Ops and tooling for efficient machine learning processes

Good to Have

  • Our stack also includes languages like Go and Elixir. We would love it if you know any of these or take interest in functional programming.
  • We use Docker and Kubernetes for deploying our services, so an understanding of this would be useful to have.
  • Experience with other platforms, frameworks, and tools.

Other things to keep in mind

  • Our goal is to help a significant part of the world’s population unlock real opportunities. This is an opportunity to make a positive impact here, and we hope you like it as much as we do.

 

Life At IDfy

People at IDfy care about creating value. We take pride in the strong collaborative culture that we have built, and our love for solving challenging problems. Life at IDfy is not always what you’d expect at a tech start-up that’s growing exponentially every quarter. There’s still time and space for balance.

 

We host regular talks, events and performances around Life, Art, Sports, and Technology; continuously sparking creative neurons in our people to keep their intellectual juices flowing. There’s never a dull day at IDfy. The office environment is casual and it goes beyond just the dress code. We have no conventional hierarchies and believe in an open-door policy where everyone is approachable.

Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
Python
Hadoop
Big Data
Spark
Data engineering
+3 more

Key Responsibilities (Data Developer - Python, Spark)

Experience: 2 to 9 years

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests (a brief sketch follows this section).

Elaborate stories in a collaborative agile environment (Scrum or Kanban)

Familiarity with cloud platforms like GCP, AWS or Azure.

Experience with large data volumes.

Familiarity with writing REST-based services.

Experience with distributed processing and systems

Experience with Hadoop / Spark toolsets

Experience with relational database management systems (RDBMS)

Experience with Data Flow development

Knowledge of Agile and associated development techniques.
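The sketch referenced above under automated testing: a pytest-style unit test for a small data transformation. The transformation and column names are hypothetical; pandas and pytest are assumed.

```python
# Minimal sketch of a unit test for a small data transformation.
# The transformation and column names are hypothetical; assumes pandas and pytest.
import pandas as pd

def add_total_column(df: pd.DataFrame) -> pd.DataFrame:
    """Add a 'total' column computed as quantity * unit_price."""
    out = df.copy()
    out["total"] = out["quantity"] * out["unit_price"]
    return out

def test_add_total_column():
    df = pd.DataFrame({"quantity": [2, 3], "unit_price": [10.0, 4.0]})
    result = add_total_column(df)
    assert list(result["total"]) == [20.0, 12.0]
    # The input frame must not be mutated.
    assert "total" not in df.columns
```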

Cervello
Agency job
via StackNexus, by Suman Kattella
Hyderabad
5 - 7 yrs
₹5L - ₹15L / yr
Data engineering
Data modeling
Data Warehouse (DWH)
SQL
Windows Azure
+3 more
Contract job - long-term, for 1 year

Client - Cervello
Job Role - Data Engineer
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure Data Modelling, Python, SQL and Azure Databricks.
Notice period - Immediate to 15 days
Venture Highway
Posted by Nipun Gupta
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹30L / yr
Python
Data engineering
Data Engineer
MySQL
MongoDB
+5 more
- Experience with Python and data scraping (a brief sketch follows this list).
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
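The sketch referenced in the first point: a minimal Python data-scraping example assuming the requests and beautifulsoup4 packages; the URL and CSS selector are placeholders.

```python
# Minimal data-scraping sketch. Assumes the requests and beautifulsoup4
# packages; the URL and CSS selector are placeholders only.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/articles", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
# Collect headline text from a hypothetical listing page.
titles = [h.get_text(strip=True) for h in soup.select("h2.article-title")]
print(titles)
```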

Preference for candidates working in tech product companies
Pune
5 - 8 yrs
₹10L - ₹17L / yr
Python
Big Data
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more
  • Must have 5-8 years of experience in handling data
  • Must have the ability to interpret large amounts of data and to multi-task
  • Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting, and databases (SQL, etc.) (a brief sketch follows this list)
  • Must have strong analytical and critical thinking to resolve business problems using data and tech
  • Must have domain familiarity with and interest in cloud technologies (GCP, Microsoft Azure, AWS), open-source technologies, and enterprise technologies
  • Must have the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
  • Must have good communication skills
  • Working knowledge/exposure to ElasticSearch, PostgreSQL, Athena, PrestoDB, Jupyter Notebook
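The sketch referenced above ties together the Python, SQL and PostgreSQL items: pulling data from PostgreSQL into Python for analysis. Connection details and the query are placeholders; the psycopg2 package is assumed.

```python
# Minimal sketch of pulling data from PostgreSQL into Python for analysis.
# Connection details, table and query are placeholders; assumes psycopg2.
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="analytics", user="analyst", password="secret"
)
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT event_type, COUNT(*) FROM events "
            "WHERE event_time >= NOW() - INTERVAL '7 days' "
            "GROUP BY event_type ORDER BY 2 DESC"
        )
        for event_type, count in cur.fetchall():
            print(event_type, count)
finally:
    conn.close()
```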