NLP Engineer - Artificial Intelligence
Posted by Layak Singh
2 - 5 yrs
₹5L - ₹10L / yr
Bengaluru (Bangalore)
Skills
Artificial Intelligence (AI)
Python
Natural Language Processing (NLP)
Deep Learning
Machine Learning (ML)
Java
Scala
Natural Language Toolkit (NLTK)
We at Artivatic are seeking a passionate, talented and research-focused Natural Language Processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research or implementation experience in modeling and developing NLP tools, and experience working with machine learning and deep learning algorithms.

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Mathematics or a related field, with specialization in Natural Language Processing, Machine Learning or Deep Learning.
  • A publication record in conferences/journals is a plus.
  • 2+ years of working/research experience building NLP-based solutions is preferred.

Required Skills:

  • Hands-on experience building NLP models using NLP libraries and toolkits such as NLTK, Stanford NLP, etc.
  • Good understanding of rule-based, statistical and probabilistic NLP techniques.
  • Good knowledge of NLP approaches and concepts such as topic modeling, text summarization, semantic modeling, named entity recognition, etc.
  • Good understanding of machine learning and deep learning algorithms.
  • Good knowledge of data structures and algorithms.
  • Strong programming skills in Python/Java/Scala/C/C++.
  • Strong problem-solving and logical skills.
  • A go-getter attitude with a willingness to learn new technologies.
  • Well versed in software design paradigms and good development practices.

Responsibilities:

  • Developing novel algorithms and modeling techniques to advance the state of the art in Natural Language Processing.
  • Developing NLP-based tools and solutions end to end.
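As a small illustration of the statistical NLP techniques named above, here is a minimal text-classification sketch in Python with scikit-learn. The toy insurance-domain corpus and labels are invented for the example; a production system would use NLTK/Stanford NLP pipelines and real data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy insurance-domain corpus (hypothetical data, for illustration only)
docs = [
    "claim was settled quickly",
    "premium payment overdue",
    "hospital bill reimbursed under claim",
    "policy renewal premium due",
]
labels = ["claims", "billing", "claims", "billing"]

# Bag-of-words features + Naive Bayes: a classic statistical NLP baseline
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["reimbursed the hospital claim"])[0])  # → claims
```

The same pipeline shape extends to real corpora; only the vectorizer settings and training data change.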
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar
Software Engineer, Memorres
"For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering."

Companies hiring on Cutshort

About Artivatic

Founded: 2017
Stage: Raised funding
About
Artivatic empowers healthcare, insurance, broker and re-insurance businesses and developers to re-imagine insurance and health products for the next billion users. We are constantly building low-cost, modular API infrastructure so that businesses and hospitals can go live in a matter of days, not months. Artivatic is developing proprietary cutting-edge solutions that enable enterprises to give 1 billion people access to insurance, financial and health benefits, using alternative data sources to increase productivity, efficiency, automation and profitability, helping them do business more intelligently and seamlessly.
Connect with the team
Shreya Roy
Layak Singh
Akanksha Naini

Company social profiles: blog | LinkedIn | Twitter | Facebook

Similar jobs

India's best Short Video App
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
+26 more
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js, Express.js or Python
3-4 years of experience in big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka Streams, Hive, Druid, Presto, Elasticsearch, Airflow, etc.
3-4 years of experience in building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), and reactive programming
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, and ElastiCache (Redis + Memcached)
Experience with designing and building high-scale app backends and micro-services leveraging cloud-native services on AWS like proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring and telemetry
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, and fault-tolerance
Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend micro-services
5-7 years of strong design/development experience in building massively large-scale, high-throughput, low-latency distributed internet systems and products
Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper, NoSQL systems, etc.
Agile methodologies, sprint management, roadmaps, mentoring, documentation, software architecture
Liaising with Product Management, DevOps, QA, clients and other teams
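To illustrate the asynchronous, non-blocking IO style mentioned above, here is a minimal sketch of an async TCP service using Python's asyncio. It is a stand-in for a real RPC framework; the echo-uppercase "protocol" is invented for the example.

```python
import asyncio

async def handle(reader, writer):
    # One coroutine per connection: non-blocking IO lets a single thread
    # serve many concurrent clients without blocking on any one of them.
    data = await reader.readline()
    writer.write(data.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def serve(port=8888):
    # Bind the server and run until cancelled
    server = await asyncio.start_server(handle, "127.0.0.1", port)
    async with server:
        await server.serve_forever()
```

Real high-throughput services would add connection limits, timeouts, and a proper framing protocol on top of this skeleton.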
 
Your Experience Across The Years in the Roles You’ve Played
 
Have 5-7 years of total experience, with 2-3 years at a startup.
Have a B.Tech or M.Tech or an equivalent academic qualification from a premier institute.
Experience in product companies working on internet-scale applications is preferred.
Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
Follow the Cloud Native Computing Foundation landscape, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with quality: Your production code just works and scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹60L / yr
Data engineering
Big Data
Spark
Apache Kafka
Cassandra
+20 more
Responsibilities

● Contribute to gathering functional requirements, developing technical specifications, and test case planning
● Demonstrate technical expertise and solve challenging programming and design problems
● 60% hands-on coding, with architecture ownership of one or more products
● Articulate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years' experience designing and developing applications in data engineering
● Hands-on experience with Big Data ecosystems
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages: Java/J2EE, Scala, Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing and Test-Driven Development (TDD)
● Business acumen: strategic thinking and strategy development
● Experience on cloud, preferably AWS, is preferable
● Good understanding of, and ability to develop, software prototypes or proofs of concept (POCs) for various data engineering requirements
● Experience with Agile development, Scrum, or Extreme Programming methodologies
AI-powered SaaS company
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
Looker
Tableau
Data Analytics
Data Analyst
SQL
+8 more
Role and Responsibilities

  • Own the product analytics of Bidgely's end-user-facing products; measure and identify areas of improvement through data
  • Liaise with Product Managers and Business Leaders to understand product issues and priorities, and support them through relevant product analytics
  • Own the automation of product analytics through good SQL knowledge
  • Develop early-warning metrics for production, and highlight issues and breakdowns for resolution
  • Resolve client escalations and concerns regarding key business metrics
  • Define and own execution
  • Own the Energy Efficiency program designs, dashboard development, and monitoring of existing Energy Efficiency programs
  • Deliver data-backed analysis and statistically proven solutions
  • Research and implement best practices
  • Mentor team of analysts

Qualifications and Education Requirements

  • B.Tech from a premier institute with 5+ years analytics experience or Full-time MBA from a premier b-school with 3+ years of experience in analytics/business or product analytics
  • Bachelor's degree in Business, Computer Science, Computer Information Systems, Engineering, Mathematics, or other business/analytical disciplines

Skills needed to excel

  • Proven analytical and quantitative skills and an ability to use data and metrics to back up assumptions, develop business cases, and complete root cause analyses
  • Excellent understanding of retention, churn, and acquisition of the user base
  • Ability to employ statistics and anomaly detection techniques for data-driven analytics
  • Ability to put yourself in the shoes of the end customer and understand what "product excellence" means
  • Ability to rethink existing products and use analytics to identify new features and product improvements
  • Ability to rethink existing processes and design new processes for more effective analyses
  • Strong SQL knowledge; working experience with Looker and Tableau is a great plus
  • Strong commitment to quality, visible in the thoroughness of analysis and techniques employed
  • Strong project management and leadership skills
  • Excellent communication (oral and written) and interpersonal skills, and an ability to effectively communicate with both business and technical teams
  • Ability to coach and mentor analysts on technical and analytical skills
  • Good knowledge of statistics, basic machine learning, and A/B testing is preferable
  • Experience as a growth hacker and/or in product analytics is a big plus
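As a toy illustration of the retention and churn metrics mentioned above, week-over-week retention reduces to a set intersection over active-user IDs (the user IDs here are invented):

```python
# Hypothetical weekly active-user sets
week1_users = {"u1", "u2", "u3", "u4"}
week2_users = {"u2", "u4", "u5"}

# Retention: fraction of week-1 users still active in week 2
retention = len(week1_users & week2_users) / len(week1_users)

# Churn is the complement of retention
churn = 1 - retention

print(retention, churn)  # → 0.5 0.5
```

In practice the same computation runs in SQL over event tables, with cohorts keyed by signup week.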
Kloud9 Technologies
Prem Kumar
Posted by Prem Kumar
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹24L / yr
Machine Learning (ML)
Data Science
Python
Java
R Programming

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by, and spends heavily on, physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers has been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform- and technology-independent. Our vendor independence not only gives us a unique perspective on the cloud market, but also ensures that we deliver the available cloud solutions that best meet our clients' requirements.

 

Responsibilities:

●       Studying, transforming, and converting data science prototypes

●       Deploying models to production

●       Training and retraining models as needed

●       Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their respective scores

●       Analyzing the errors of the model and designing strategies to overcome them

●       Identifying differences in data distribution that could affect model performance in real-world situations

●       Performing statistical analysis and using results to improve models

●       Supervising the data acquisition process if more data is needed

●       Defining data augmentation pipelines

●       Defining the pre-processing or feature engineering to be done on a given dataset

●       Extending and enriching existing ML frameworks and libraries

●       Understanding when the findings can be applied to business decisions

●       Documenting machine learning processes
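A minimal sketch of the prototype-to-production step described above: train a model, serialize it into a deployment artifact, and restore it for serving. The toy data is invented, and real pipelines would typically use joblib and a model registry rather than raw pickle.

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: the label flips to 1 once the feature passes ~1.5
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

# Serialize (the "artifact" a deployment would ship) and restore it
artifact = pickle.dumps(model)
restored = pickle.loads(artifact)

print(int(restored.predict([[2.5]])[0]))  # → 1
```

The restored model behaves identically to the trained one, which is what makes train-once, serve-anywhere deployment possible.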

 

Basic requirements: 

 

●       4+ years of IT experience in which at least 2+ years of relevant experience primarily in converting data science prototypes and deploying models to production

●       Proficiency with Python and machine learning libraries such as scikit-learn, matplotlib, seaborn and pandas

●       Knowledge of Big Data frameworks like Hadoop, Spark, Pig, Hive, Flume, etc

●       Experience in working with ML frameworks like TensorFlow, Keras, OpenCV

●       Strong written and verbal communication skills

●       Excellent interpersonal and collaboration skills.

●       Expertise in visualizing and manipulating big datasets

●       Familiarity with Linux

●       Ability to select hardware to run an ML model with the required latency

●       Robust data modelling and data architecture skills.

●       Advanced degree in Computer Science/Math/Statistics or a related discipline.

●       Advanced Math and Statistics skills (linear algebra, calculus, Bayesian statistics, mean, median, variance, etc.)

 

Nice to have

●       Familiarity with writing Java and R code.

●       Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world

●       Verifying data quality, and/or ensuring it via data cleaning

●       Supervising the data acquisition process if more data is needed

●       Finding available datasets online that could be used for training

 

Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Archwell
Agency job
via AVI Consulting LLP by Sravanthi Puppala
Mysore
2 - 8 yrs
₹1L - ₹15L / yr
Snowflake
Python
SQL
Amazon Web Services (AWS)
Windows Azure
+6 more

Title:  Data Engineer – Snowflake

 

Location: Mysore (Hybrid model)

Experience: 2-8 yrs

Type: Full Time

Walk-in date: 25th Jan 2023 @Mysore 

 

Job Role: We are looking for an experienced Snowflake developer to join our team as a Data Engineer, working as part of a team to help design and develop data-driven solutions that deliver insights to the business. The ideal candidate is a data pipeline builder and data wrangler who enjoys building data-driven systems from the ground up. You will be responsible for building and optimizing our data pipelines, as well as building automated processes for production jobs. You will support our software developers, database architects, data analysts and data scientists on data initiatives.

 

Key Roles & Responsibilities:

  • Use advanced Snowflake, Python and SQL to extract data from source systems for ingestion into a data pipeline.
  • Design, develop and deploy scalable and efficient data pipelines.
  • Analyze and assemble large, complex datasets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements. For example: automating manual processes, optimizing data delivery, re-designing data platform infrastructure for greater scalability.
  • Build required infrastructure for optimal extraction, loading, and transformation (ELT) of data from various data sources using AWS and Snowflake leveraging Python or SQL technologies.
  • Monitor cloud-based systems and components for availability, performance, reliability, security and efficiency
  • Create and configure appropriate cloud resources to meet the needs of the end users.
  • As needed, document topology, processes, and solution architecture.
  • Share your passion for staying on top of tech trends, experimenting with and learning new technologies
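A minimal sketch of the idempotent load step in such an ELT pipeline, using Python's built-in sqlite3 as a stand-in for Snowflake (the table and rows are invented; Snowflake itself would use MERGE statements or Snowpipe for this):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def load(rows):
    # Idempotent upsert: replaying the same batch never duplicates rows,
    # which keeps production re-runs safe.
    conn.executemany(
        "INSERT INTO customers (id, name) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
        rows,
    )

load([(1, "Asha"), (2, "Ravi")])
load([(1, "Asha"), (2, "Ravi K")])  # replayed batch with one update

print(conn.execute("SELECT name FROM customers ORDER BY id").fetchall())
```

The upsert-on-key pattern is what makes a pipeline restartable: failed jobs can simply be re-run.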

 

Qualifications & Experience Requirements:

  • Bachelor's degree in computer science, computer engineering, or a related field
  • 2-8 years of experience working with Snowflake
  • 2+ years of experience with AWS services
  • Ability to write stored procedures and functions in Snowflake
  • At least 2 years' experience as a Snowflake developer
  • Strong SQL knowledge
  • Data ingestion into Snowflake using Snowflake procedures
  • ETL experience is a must (with any tool)
  • Awareness of Snowflake architecture
  • Experience working on migration projects
  • Data warehousing concepts (optional)
  • Experience with cloud data storage and compute components, including Lambda functions, EC2 instances, and containers
  • Experience with data pipeline and workflow management tools such as Airflow
  • Experience cleaning, testing, and evaluating data quality from a wide variety of ingestible data sources
  • Experience working with Linux and UNIX environments
  • Experience profiling data, with and without data definition documentation
  • Familiarity with Git
  • Familiarity with issue tracking systems like JIRA or Trello
  • Experience working in an agile environment

Desired Skills:

  • Experience in Snowflake. Must be willing to be Snowflake certified in the first 3 months of employment.
  • Experience with a stream-processing system: Snowpipe
  • Working knowledge of AWS or Azure
  • Experience in migrating from on-prem to cloud systems
Climate Connect Digital
at Climate Connect Digital
4 recruiters
Sarika Shukla
Posted by Sarika Shukla
Remote only
6 - 12 yrs
₹5L - ₹15L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Time series
+1 more
Our team is inspired to change the world by making energy more intelligent, greener, and affordable. Established in 2011 in London, UK, and now headquartered in Gurgaon, India, we have grown from unassuming beginnings into a leading energy-AI software player at the vanguard of accelerating the global energy transition.

In 2020, ReNew Power, India's largest renewables developer, acquired Climate Connect. Following ReNew's listing on NASDAQ in summer 2021, Climate Connect became the technology anchor of a new, fully independent subsidiary, Climate Connect Digital, with backing from ReNew as the anchor investor to pursue an ambitious and visionary new strategy for rapid organic and inorganic growth.

Our mission has technology at its core and involves unlocking value through intelligent software, digitalisation, and ‘horizontal integration’ across the energy ecosystem. However, computational power and machine learning in the energy sector have yet to be fully leveraged and can create massive value.

We are looking for people with knowledge of:

● Excellent verbal communications, including the ability to clearly and concisely articulate complex concepts to both technical and non-technical collaborators

● Demonstrated history of knowledge in Computer Science, Statistics, Mathematics, Software Engineering or related technical fields

● Industry experience with proven ability to apply scientific methods to solve real-world problems on large scale data

● Extensive experience with Python and SQL for software development, data analysis, and machine learning

● Experience with libraries: TensorFlow, Keras, NumPy, scikit-learn, pandas, scikit-image, matplotlib, Jupyter, Statsmodels

● Experience with time-series analysis, including EDA, statistical inference, ARIMA, GARCH

● Knowledge of Cluster Analysis, Classification Trees, Discriminant Analysis, Neural Networks, Deep Learning, Logistic Regression, Associations Analysis

● Hands-on experience implementing deep learning models with video and time-series data (CNNs, LSTMs, Autoencoders, RBMs)

● Experience with regression, multi-criteria decision making, descriptive statistics, hypothesis testing, segmentation/classification, predictive analytics

● Aptitude and experience in applied statistics and machine learning techniques

● Firm grasp of interactive, self-serve visualization tools such as business intelligence dashboards and notebooks

● Experience launching production-quality machine learning models at scale e.g. dataset construction, preprocessing, deployment, monitoring, quality assurance

● Experience with math programming is an added advantage. For example: optimization, computational geometry, numerical linear algebra, etc.
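As a small illustration of the time-series modelling listed above, here is an AR(1) fit by least squares in plain NumPy. The data is synthetic; real work would use statsmodels' ARIMA/GARCH implementations.

```python
import numpy as np

# Simulate an AR(1) process: x[t] = phi * x[t-1] + noise
rng = np.random.default_rng(0)
phi_true = 0.8
x = np.zeros(300)
for t in range(1, 300):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=0.1)

# Least-squares estimate of phi from lagged pairs (x[t-1], x[t])
phi_hat = float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))
print(round(phi_hat, 2))
```

The estimate lands close to the true coefficient of 0.8, which is the sanity check one would run before trusting a fitted forecasting model.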

What you’ll work on:

We are developing a marketing automation platform through which an electricity retailer may apply a suite of proprietary ML algorithms to optimize outcomes across a range of channels and touchpoints. We require the services of a data science professional who can design and implement various AI/ML models that optimize the performance, quality, and reliability of the product. This position offers a potential pathway to leading an entire ML expert team. These are a few things you can look forward to working on:

● Translating high-level problems and key objectives into granular model requirements.

● Defining acceptance criteria that are well structured, detailed, and comprehensive.

● Developing and testing algorithms using our price forecasts, and customers' energy portfolio.

● Collaborating with the software engineering team in deploying the developed models tailored to specific customer needs.

● Participating in the software development process, and doing the required testing, and debugging to support the deployed models.

● Taking responsibility for ensuring tracking of appropriate events/metrics, so that monitoring is timely and rigorous.

● Driving the response to the discovery of regressions or failures, by undertaking various exercises (e.g. debugging, RCA, etc.) as needed

Experience:

● 6-11 years of experience in the field of Data Science or Machine Learning

Qualifications:

● B.E / B.Tech / M.Tech / PhD in CS/IT or Data Science

What’s in it for you

We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:

Flexible working hours
Unlimited annual leaves
Learning and development budget
Medical insurance/Term insurance, Gratuity benefits over and above the salaries
Access to industry and domain thought leaders

At Climate Connect Digital, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.

Link to apply - https://climateconnect.digital/careers/?jobId=gaG9dgeTYBvF
Angel One
at Angel One
4 recruiters
Andleeb Mujeeb
Posted by Andleeb Mujeeb
Remote only
2 - 6 yrs
₹12L - ₹18L / yr
Amazon Web Services (AWS)
PySpark
Python
Scala
Go Programming (Golang)
+19 more

Designation: Specialist - Cloud Service Developer (ABL_SS_600)

Position description:

  • This person will be primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
  • Apply advanced troubleshooting techniques to provide solutions to issues pertaining to service availability, performance, and resiliency
  • Monitor and optimize performance using AWS dashboards and logs
  • Partner with engineering leaders and peers to deliver technology solutions that meet the business requirements
  • Work with the cloud team in an agile approach and develop cost-optimized solutions

 

Primary Responsibilities:

  • Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.

 

Reporting Team

  • Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
  • Reporting Department: Application Development (2487)

Required Skills:

  • AWS certification would be preferred
  • Good understanding of monitoring (CloudWatch, alarms, logs, custom metrics, SNS configuration)
  • Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora and other AWS services
  • Knowledge of storage (S3, lifecycle management, event configuration) is preferred
  • Strong in data structures and programming (PySpark / Python / Golang / Scala)
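A minimal sketch of the Lambda-style development named above: a Python handler for an API Gateway proxy event. The payload shape and field names are assumptions for illustration.

```python
import json

def handler(event, context):
    # AWS Lambda-style handler: parse the request body from an API Gateway
    # proxy event and return a JSON response envelope.
    name = json.loads(event.get("body") or "{}").get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Because the handler is a plain function of (event, context), it can be unit-tested locally by calling it with a dict before any AWS deployment.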
netmedscom
at netmedscom
3 recruiters
Vijay Hemnath
Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining data warehouses and data lakes for an organization. This role would collaborate closely with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines, and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies
  • Develop programs in Scala and Python as part of data cleaning and processing
  • Assemble large, complex data sets that meet functional / non-functional business requirements, fostering data-driven decision making across the organization
  • Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Provide high operational excellence, guaranteeing high availability and platform stability
  • Collaborate closely with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark); ability to create, manage and manipulate Spark DataFrames; expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Should have knowledge on Shell Scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • Having a good understanding of machine learning landscape and concepts. 
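The MapReduce pattern referenced in the skills above can be illustrated without a cluster in a few lines of plain Python (the documents are invented for the example):

```python
from collections import Counter
from functools import reduce

docs = ["big data pipeline", "data lake and data warehouse"]

# Map phase: emit per-document word counts independently (parallelizable)
mapped = [Counter(doc.split()) for doc in docs]

# Reduce phase: merge the partial counts into a global total
totals = reduce(lambda a, b: a + b, mapped, Counter())

print(totals["data"])  # → 3
```

Hadoop and Spark apply the same map-then-merge structure, just distributed across machines with shuffling between the two phases.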

 

Qualifications and Experience:

Engineering and postgraduate candidates, preferably in Computer Science, from premier institutions, with proven work experience as a Big Data Engineer or in a similar role for 3-5 years.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification

MNC
at MNC
Agency job
via Fragma Data Systems by Priyanka U
Chennai
1 - 5 yrs
₹6L - ₹12L / yr
Data Science
Natural Language Processing (NLP)
Data Scientist
R Programming
Python
Skills
  • Python coding skills
  • Experience with scikit-learn, pandas, and TensorFlow/Keras
  • Machine learning: designing ML models and explaining them, for regression, classification, dimensionality reduction, anomaly detection, etc.
  • Implementing machine learning models and pushing them to production
  • Creating Docker images for ML models, and REST API creation in Python
1) Data Scientist with NLP experience
  • Additional skills (compulsory):
    • Knowledge and professional experience of text and NLP-related projects, such as text classification, text summarization, topic modeling, etc.
2) Data Scientist with computer vision for documents experience
  • Additional skills (compulsory):
    • Knowledge and professional experience of vision and deep learning for documents: CNNs, deep neural networks using TensorFlow with Keras for object detection, OCR implementation, document extraction, etc.
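A minimal sketch of the "REST API for an ML model" item above, using only Python's standard library. The scoring function is a hypothetical stand-in for a trained model; a real service would typically use Flask or FastAPI packaged in a Docker image.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Hypothetical "model": a fixed linear scorer standing in for a real one
    return sum(0.5 * x for x in features)

class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, score it, and return JSON
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"score": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging
```

Serving it is `HTTPServer(("0.0.0.0", 8080), ModelHandler).serve_forever()`; swapping `predict` for a deserialized trained model gives the production shape.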
Precily Private Limited
at Precily Private Limited
5 recruiters
Bharath Rao
Posted by Bharath Rao
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹3L - ₹9L / yr
Data Science
Artificial Neural Network (ANN)
Artificial Intelligence (AI)
Machine Learning (ML)
Python
+3 more
Precily AI: Automatic summarization, shortening a business document or book with our AI to create a summary of the major points of the original document. Our AI can produce a coherent summary, taking into account variables such as length, writing style, and syntax. We are also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks to process data and provide solutions for industries such as enterprise, healthcare, and legal.
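A toy extractive version of the summarization described above can be sketched in a few lines of Python: score each sentence by the corpus-wide frequency of its words and keep the top-scoring ones. This is a naive stand-in for Precily's actual models, for illustration only.

```python
from collections import Counter

def _words(s):
    return s.lower().replace(".", " ").split()

def summarize(text, n=1):
    # Score each sentence by the total corpus frequency of its words,
    # then keep the n top-scoring sentences (naive extractive summary).
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(_words(text))
    ranked = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in _words(s)),
                    reverse=True)
    return ". ".join(ranked[:n])
```

Production summarizers weight terms more carefully (TF-IDF, embeddings) and use abstractive models, but the score-and-select skeleton is the same.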
Why apply to jobs via Cutshort

Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.

Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd-party agencies here.

Move faster with AI
We use AI to get you faster responses, recommendations and an unmatched user experience.

21,01,133 matches delivered
37,12,187 network size
15,000 companies hiring

Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.