BERT Jobs in Delhi, NCR and Gurgaon


Apply to 11+ BERT Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest BERT Job opportunities across top companies like Google, Amazon & Adobe.

Fintech lead
Agency job
via The Hub by Sridevi Viswanathan
Gurugram, Noida
3 - 8 yrs
₹5L - ₹15L / yr
Natural Language Processing (NLP)
BERT
Machine Learning (ML)
Data Science
Python

Who we are looking for

· A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.

Roles and Responsibilities

· Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.

· Mentor and coach other team members.

· Evaluate the performance of NLP models and ideate on how they can be improved.

· Support internal and external NLP-facing APIs.

· Keep up to date with current research on NLP, Machine Learning and Deep Learning.

Mandatory Requirements

· Any graduate degree, with at least 2 years of demonstrated experience as a Data Scientist.

Behavioural Skills

· Strong analytical and problem-solving capabilities.

· Proven ability to multi-task and deliver results within tight time frames.

· Strong verbal and written communication skills.

· Strong listening skills and eagerness to learn.

· Strong attention to detail and the ability to work efficiently in a team as well as individually.

Technical Skills

Hands-on experience with

· NLP

· Deep Learning

· Machine Learning

· Python

· BERT
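
For illustration only (not part of the listing): a minimal sketch of what hands-on BERT work can look like, scoring texts with a pre-trained BERT classifier via the Hugging Face transformers library. The model name is an example checkpoint from the public Hub, not something this role specifies; any fine-tuned BERT classifier works the same way.

    # Minimal BERT inference sketch (assumes: pip install torch transformers)
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "textattack/bert-base-uncased-SST-2"  # example fine-tuned BERT checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    texts = ["The loan approval process was quick and painless."]
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.argmax(dim=-1))  # predicted class index per input text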

Preferred Requirements

· Experience in Computer Vision is preferred

Role: Data Scientist

Industry Type: Banking

Department: Data Science & Analytics

Employment Type: Full Time, Permanent

Role Category: Data Science & Machine Learning

Tier #1 MNC
Chennai, Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Coimbatore, Kochi (Cochin)
4 - 10 yrs
Best in industry
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning

Job Description:

 

1. Be a hands-on problem solver with a consultative approach, who can apply Machine Learning and Deep Learning algorithms to solve business challenges.

a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combination of techniques best solves the problem.

b. Improve model accuracy to deliver greater business impact.

c. Estimate the business impact of deploying the model.


2. Work with the domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge.


3. Work with tools and scripts for pre-processing the data and feature engineering for model development (Python / R / SQL / cloud data pipelines).


4. Design, develop and deploy Deep Learning models using TensorFlow / PyTorch.


5. Experience using Deep Learning models with text, speech, image and video data.

a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc. (see the sketch below).

b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV.

c. Knowledge of state-of-the-art Deep Learning algorithms.
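
As a hedged illustration of the entity-recognition work in item 5a, here is a minimal spaCy sketch; the en_core_web_sm model and the sample sentence are illustrative assumptions, and a custom entity recognizer would be trained on domain data instead.

    # Minimal spaCy NER sketch
    # (assumes: pip install spacy && python -m spacy download en_core_web_sm)
    import spacy

    nlp = spacy.load("en_core_web_sm")  # small pre-trained English pipeline
    doc = nlp("Apple is opening a new data centre in Hyderabad for $1 billion.")

    for ent in doc.ents:
        print(ent.text, ent.label_)  # e.g. ("Apple", "ORG"), ("Hyderabad", "GPE")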


6. Optimize and tune Deep Learning models for the best possible accuracy.


7. Use visualization tools/modules (e.g., Power BI / Tableau) to explore and analyze outcomes and for model validation.


8. Work with application teams in deploying models on the cloud as a service or on-premises.

a. Deploy models in a test/control framework for tracking.

b. Build CI/CD pipelines for ML model deployment.


9. Integrate AI/ML models with other applications using REST APIs and other connector technologies (see the sketch below).
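
A minimal sketch of serving a model behind a REST API, here with FastAPI; the endpoint path, payload schema and model file are illustrative assumptions, not details from this listing.

    # Minimal model-serving REST API sketch
    # (assumes: pip install fastapi uvicorn scikit-learn joblib; run: uvicorn app:app)
    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = joblib.load("model.joblib")  # hypothetical pre-trained scikit-learn model

    class Features(BaseModel):
        values: list[float]  # flat numeric feature vector

    @app.post("/predict")
    def predict(features: Features):
        pred = model.predict([features.values])
        return {"prediction": pred.tolist()}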


10. Constantly upskill and stay current with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

 

 

· Technology/Subject Matter Expertise

  • Sufficient expertise in machine learning and the mathematical and statistical sciences
  • Use of version control and collaboration tools like Git / GitHub
  • Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
  • Ability to develop prototype-level ideas into a solution that can scale to industrial-grade strength
  • Ability to quantify and estimate the impact of ML models

 

· Soft Skills Profile

  • Curiosity to think in fresh and unique ways with the intent of breaking new ground.
  • Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control.
  • Ability to think ahead and anticipate the needs of the problem being solved.
  • Ability to communicate key messages effectively and articulate strong opinions in large forums.

 

 

· Desirable Experience:

  • Keen contributor to open-source communities, and communities like Kaggle
  • Ability to process huge amounts of data using PySpark/Hadoop
  • Development and application of Reinforcement Learning
  • Knowledge of optimization/genetic algorithms
  • Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
  • Optimizing and tuning Deep Learning models for the best possible accuracy
  • Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
  • Appreciation of digital ethics and data privacy will be important
  • Experience working with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker and Google Cloud will be a big plus
  • Experience with platforms like DataRobot, Cognitive Scale, H2O.ai, etc. will be a big plus
A fast-growing Big Data company
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration and DataOps.

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

7 years of work experience with ETL, data modelling and data architecture. Proficient in ETL optimization, designing, coding and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, with orchestration in Airflow (see the sketch below).
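
As a hedged sketch of the Airflow orchestration mentioned above, a minimal DAG that triggers a Glue job; the DAG id, job name and region are invented placeholders, and the operator assumes the apache-airflow-providers-amazon package on Airflow 2.4+.

    # Minimal Airflow DAG sketch that triggers an AWS Glue job
    # (assumes: pip install apache-airflow apache-airflow-providers-amazon)
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

    with DAG(
        dag_id="nightly_glue_etl",      # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_etl = GlueJobOperator(
            task_id="run_glue_etl",
            job_name="orders_etl_job",  # hypothetical Glue job name
            region_name="ap-south-1",
        )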


Technical Experience:

Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latencies.


➢ Enhancements, new development, defect resolution and production support of Big Data ETL using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions and Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python and PySpark (see the sketch after this list).

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will be working in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI and SDKs.
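
To make the "Author ETL processes using Python and PySpark" bullet concrete, here is a minimal Glue job skeleton; the catalog database, table and S3 path are placeholders, and the awsglue modules exist only inside the Glue runtime.

    # Minimal AWS Glue ETL job sketch (awsglue is available only in the Glue runtime)
    import sys

    from awsglue.context import GlueContext
    from awsglue.dynamicframe import DynamicFrame
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog (placeholder database/table names)
    source = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders"
    )

    # Transform with plain PySpark, then convert back to a DynamicFrame
    cleaned = DynamicFrame.fromDF(
        source.toDF().where("order_id IS NOT NULL"), glue_context, "cleaned_orders"
    )

    # Write curated Parquet back to S3 (placeholder bucket/prefix)
    glue_context.write_dynamic_frame.from_options(
        frame=cleaned,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/orders/"},
        format="parquet",
    )
    job.commit()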


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis and EC2 clusters is highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Information Solution Provider Company
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 7 yrs
₹10L - ₹15L / yr
Spark
Scala
Hadoop
Big Data
Data engineering

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users.
  • Understanding various data security standards and using appropriate data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform (see the sketch after this list).
  • Expert in building and optimizing ‘big data’ data/ML pipelines, architectures and data sets.
  • Knowledge of modelling unstructured data into structured data designs.
  • Experience with Big Data access and storage techniques.
  • Experience in cost estimation based on design and development effort.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server and application logs.
  • Highly organized, self-motivated and proactive, with the ability to propose the best design solutions.
  • Good time management and multitasking skills to meet deadlines, working independently and as part of a team.
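
One hedged example of the Spark query tuning referenced above: replacing a shuffle-heavy join with a broadcast join and inspecting the physical plan. Table paths and column names are invented for illustration.

    # Minimal PySpark join-optimization sketch (assumes: pip install pyspark)
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("join-tuning").getOrCreate()

    orders = spark.read.parquet("s3://example/orders/")        # large fact table
    countries = spark.read.parquet("s3://example/countries/")  # small dimension table

    # Broadcasting the small table avoids shuffling the large one across the cluster
    joined = orders.join(broadcast(countries), on="country_code", how="left")
    joined.explain()  # expect BroadcastHashJoin rather than SortMergeJoin in the plan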

 

Accolite Digital
Posted by Nitesh Parab
Bengaluru (Bangalore), Hyderabad, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL Server Integration Services (SSIS)

Job Title: Data Engineer

Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.

Responsibilities:

  • Design, build, and maintain data pipelines to collect, store, and process data from various sources.
  • Create and manage data warehousing and data lake solutions.
  • Develop and maintain data processing and data integration tools.
  • Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
  • Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
  • Ensure data quality and integrity across all data sources.
  • Develop and implement best practices for data governance, security, and privacy.
  • Monitor data pipeline performance and errors, and troubleshoot issues as needed.
  • Stay up-to-date with emerging data technologies and best practices.

Requirements:

Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience with ETL tools like Matillion, SSIS, Informatica.

Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.

Experience in writing complex SQL queries

Strong programming skills in languages such as Python, Java, or Scala.

Experience with data modeling, data warehousing, and data integration.

Strong problem-solving skills and ability to work independently.

Excellent communication and collaboration skills.

Familiarity with big data technologies such as Hadoop, Spark, or Kafka.

Familiarity with data warehouse / data lake technologies like Snowflake or Databricks.

Familiarity with cloud computing platforms such as AWS, Azure, or GCP.

Familiarity with Reporting tools

Teamwork / growth contribution

  • Helping the team conduct interviews and identify the right candidates
  • Adhering to timelines
  • Timely status communication and upfront communication of any risks
  • Teaching, training and sharing knowledge with peers
  • Good communication skills
  • Proven ability to take initiative and be innovative
  • An analytical mind with a problem-solving aptitude

Good to have:

Master's degree in Computer Science, Information Systems, or a related field.

Experience with NoSQL databases such as MongoDB or Cassandra.

Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.

Knowledge of machine learning and statistical modeling techniques.

If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.

Quess Corp Limited

Posted by Anjali Singh
Noida, Delhi, Gurugram, Ghaziabad, Faridabad, Bengaluru (Bangalore), Chennai
5 - 8 yrs
₹1L - ₹15L / yr
Google Cloud Platform (GCP)
Python
Big Data
Data processing
Data Visualization

A GCP Data Analyst profile must have the skill sets listed in the tags above (GCP, Python, Big Data, data processing, data visualization).

 

Infogain
Agency job
via Technogen India Pvt Ltd by Rahul Batta
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Data engineering
Python
SQL
Spark
PySpark
Sr. Data Engineer:

Core Skills: Data Engineering, Big Data, PySpark, Spark SQL and Python.

Candidates with a prior Palantir Cloud Foundry or Clinical Trial Data Model background are preferred.

Major accountabilities:

  • Responsible for Data Engineering: Foundry data pipeline creation, Foundry analysis and reporting, Slate application development, reusable code development and management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Have a good understanding of the Foundry platform landscape and its capabilities.
  • Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
  • Defines company data assets (data models) and the PySpark / Spark SQL jobs that populate them (see the sketch after this list).
  • Designs data integrations and the data quality framework.
  • Design and implement integration with internal and external systems and the F1 AWS platform using Foundry Data Connector or the Magritte agent.
  • Collaborate with data scientists, data analysts and technology teams to document and leverage their understanding of the Foundry integration with different data sources; actively participate in agile work practices.
  • Coordinate with Quality Engineers to ensure that all quality controls, naming conventions and best practices are followed.
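
For readers unfamiliar with Foundry pipeline code, below is a minimal transform sketch in the style of Foundry Code Repositories; the dataset paths are placeholders, and the transforms.api module is available only inside the Foundry environment.

    # Minimal Palantir Foundry transform sketch (transforms.api exists only in Foundry)
    from pyspark.sql import functions as F
    from transforms.api import Input, Output, transform_df

    @transform_df(
        Output("/Project/datasets/clean_subjects"),   # placeholder output dataset path
        raw=Input("/Project/datasets/raw_subjects"),  # placeholder input dataset path
    )
    def clean_subjects(raw):
        # Example pipeline step: normalize a column name and drop incomplete rows
        return (
            raw.withColumnRenamed("SUBJ_ID", "subject_id")
               .where(F.col("subject_id").isNotNull())
        )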

Desired Candidate Profile :

  • Strong data engineering background
  • Experience with Clinical Data Model is preferred
  • Experience in
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, 2+ years of experience with the Palantir Foundry platform, and 4+ years of experience with Big Data platforms
  • 5+ years of Python and Pyspark development experience
  • Strong troubleshooting and problem solving skills
  • BTech or master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
  • Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
  • Hands-on in programming languages, primarily Python, R, Java and Unix shell scripts
  • Hands-on experience with the AWS / Azure cloud platform and stack
  • Strong in API-based architecture and concepts; able to build quick PoCs using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.

  • Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision

Remote, Noida, NCR (Delhi | Gurgaon | Noida)
8 - 15 yrs
₹30L - ₹45L / yr
Data Science
Natural Language Processing (NLP)
Machine Learning (ML)
Deep Learning
Predictive modelling

Responsibilities: 

  • Identify complex business problems and work towards building analytical solutions in order to create large business impact.
  • Demonstrate leadership through innovation in software and data products from ideation/conception through design, development and ongoing enhancement, leveraging user research techniques, traditional data tools, and techniques from the data science toolkit such as predictive modelling, NLP, statistical analysis, vector space modelling, machine learning, etc. (see the sketch after this list).
  • Collaborate and ideate with cross-functional teams to identify strategic questions for the business that can be solved, and champion the effectiveness of utilizing data, analytics, and insights to shape the business.
  • Contribute to company growth efforts, increasing revenue and supporting other key business outcomes using analytics techniques.
  • Focus on driving operational efficiencies by using data and analytics to impact cost and employee efficiency.
  • Baseline current analytics capability, and ensure optimum utilization and continued advancement to stay abreast of industry developments.
  • Establish self as a strategic partner with stakeholders, focused on the full innovation system and fully supportive of initiatives from early stages to activation.
  • Review stakeholder objectives and the team's recommendations to ensure alignment and understanding.
  • Drive analytics thought leadership and contribute effectively to transformational initiatives.
  • Ensure the accuracy of data and of reporting employees' deliverables through comprehensive policies and processes.
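
As one small, hedged illustration of the "vector space modelling" toolkit item above: TF-IDF document vectors compared with cosine similarity using scikit-learn. The toy corpus is invented.

    # Minimal vector-space-model sketch (assumes: pip install scikit-learn)
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "quarterly revenue grew on strong ad sales",
        "ad sales drove revenue growth this quarter",
        "the team shipped a new mobile app release",
    ]

    vectors = TfidfVectorizer().fit_transform(docs)   # one TF-IDF vector per document
    print(cosine_similarity(vectors[0], vectors[1]))  # high: same topic
    print(cosine_similarity(vectors[0], vectors[2]))  # low: different topic
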
Cemtics

Posted by Tapan Sahani
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹5L - ₹12L / yr
Big Data
Spark
Hadoop
SQL
Python

JD:

Required Skills:

  • Intermediate- to expert-level hands-on programming in one of the following languages: Java, Python, PySpark or Scala.
  • Strong practical knowledge of SQL.
  • Hands-on experience with Spark / Spark SQL (see the sketch after this list).
  • Data structures and algorithms.
  • Hands-on experience as an individual contributor in the design, development, testing and deployment of applications based on Big Data technologies.
  • Experience with Big Data application tools, such as Hadoop, MapReduce, Spark, etc.
  • Experience with NoSQL databases like HBase, etc.
  • Experience with the Linux OS environment (shell scripting, AWK, SED).
  • Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a big RDBMS (100+ tables).
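
A brief, hedged sketch of the Spark SQL skill in the list above: registering a DataFrame as a temp view and running a window-function query. Schema and rows are invented.

    # Minimal Spark SQL sketch (assumes: pip install pyspark)
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sparksql-demo").getOrCreate()

    orders = spark.createDataFrame(
        [(1, "alice", 120.0), (2, "alice", 80.0), (3, "bob", 200.0)],
        ["order_id", "customer", "amount"],
    )
    orders.createOrReplaceTempView("orders")

    # Rank each customer's orders by amount using a window function
    spark.sql("""
        SELECT customer, order_id, amount,
               RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
        FROM orders
    """).show()
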
Spotmentor Technologies

Posted by Arpit Goyal
Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹20L - ₹30L / yr
Data Science
Machine Learning (ML)
Deep Learning
Natural Language Processing (NLP)
JOB DESCRIPTION

We're looking for a Head of Machine Learning (3+ years of experience) for our company, Spotmentor Technologies. Right now our technology team has 5 members; this is a head team member role and carries significant equity with it. We need someone who can lead the machine learning function with both vision and hands-on work, and who is excited to use this area to develop B2B products for enterprise productivity.

RESPONSIBILITIES

  • Collaborate with cross-functional team members to develop software libraries, tools, and methodologies as critical components of our computation platforms.
  • Also responsible for software profiling, performance tuning and analysis, and other general software engineering tasks.
  • Use independent judgment to take existing code, understand its function, and change/enhance it as needed.
  • Work as a team leader rather than a member.

REQUIREMENTS

  • Proficient in Python with sound knowledge of the machine learning libraries, namely scikit-learn, NumPy, Pandas, NLTK, etc.
  • Experience with deep learning tools like TensorFlow, Keras, PyTorch, etc., and integration using open-source learning platforms is required.
  • Prior experience building a fully functional machine learning algorithm for text analysis and multi-class classification with promising results (a sketch follows below).
  • Expert data scientist with professionalism in text classification, text analytics, regression and other machine learning algorithms.
  • Solid grasp of the mathematical principles behind machine learning algorithms.
  • Proficient in using version control tools (Git, Mercurial, etc.).
  • Prior experience using big data technologies like Hadoop, Spark, etc.
  • Semantic Web experience is a big plus.
  • Should be from tier 1 colleges (IITs / NITs and BITS).
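
Relating to the multi-class text classification requirement, a minimal hedged sketch with scikit-learn; the four-example training set is invented and stands in for a real labelled corpus.

    # Minimal multi-class text classification sketch (assumes: pip install scikit-learn)
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "python pandas dataframe groupby",       # label: data
        "spark cluster executor memory tuning",  # label: data
        "quarterly budget and expense report",   # label: finance
        "invoice payment terms and tax",         # label: finance
    ]
    labels = ["data", "data", "finance", "finance"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    print(clf.predict(["kafka streaming job lag"]))  # expected: ['data']
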
Sagacito

Posted by Neha Verma
NCR (Delhi | Gurgaon | Noida)
8 - 15 yrs
₹18L - ₹35L / yr
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Deep Learning
  • Analytics, Big Data, Machine Learning (including deep learning methods): algorithm design, analysis, development and performance improvement.
  • Strong understanding of statistical and predictive modelling concepts, machine-learning approaches, clustering, classification and regression techniques, and recommendation (collaborative filtering) algorithms (see the sketch below).

Share CV to me at
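
As a hedged illustration of the recommendation (collaborative filtering) line above: item-based collaborative filtering over a toy user-item rating matrix. The data and the scoring heuristic are illustrative only.

    # Minimal item-based collaborative filtering sketch
    # (assumes: pip install numpy scikit-learn)
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    # Rows = users, columns = items; 0 means "not rated" (toy data)
    ratings = np.array([
        [5.0, 4.0, 0.0, 1.0],
        [4.0, 5.0, 1.0, 0.0],
        [0.0, 1.0, 5.0, 4.0],
    ])

    item_sim = cosine_similarity(ratings.T)  # item-item similarity from co-ratings
    scores = ratings @ item_sim              # predicted affinity per user and item
    scores[ratings > 0] = -np.inf            # mask items each user already rated
    print(scores.argmax(axis=1))             # index of top recommendation per user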