DVCS Jobs in Bangalore (Bengaluru)

Apply to 11+ DVCS Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest DVCS Job opportunities across top companies like Google, Amazon & Adobe.

Synapsica Technologies Pvt Ltd
Posted by Human Resources
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹20L / yr
Python
CI/CD
DVCS
Machine Learning (ML)
Kubernetes
+4 more

Introduction

Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, rather than having to rely on a cryptic two-line diagnosis.

Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls


Your Roles and Responsibilities

We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key member of the team driving decision-making, implementation, development, and advancement of ML operations for the core AI platform.

Roles and Responsibilities:

  • Work closely with a cross-functional team to serve business goals and objectives.
  • Develop, implement, and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring, and model retraining.
  • Design and build application containerisation and orchestration with Docker and Kubernetes on the AWS platform.
  • Build and maintain code, tools, and packages in the cloud.

Requirements:

  • 2+ years of experience in data engineering.
  • 3+ years of experience in Python, with familiarity with popular ML libraries.
  • 2+ years of experience in model serving and pipelines.
  • Working knowledge of containers and orchestration (Docker, Kubernetes) on AWS.
  • Experience designing distributed systems deployed at scale.
  • Hands-on experience in coding and scripting.
  • Ability to write effective, scalable, and modular code.
  • Familiarity with Git workflows, CI/CD, and NoSQL databases such as MongoDB.
  • Familiarity with Airflow, DVC, and MLflow is a plus (a minimal MLflow sketch follows below).
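For illustration only (not part of the posting): a minimal sketch of the experiment tracking MLflow enables, assuming mlflow and scikit-learn are installed; the experiment name, data, and hyperparameters are hypothetical.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("radiology-model-demo")  # hypothetical experiment name

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_params(params)                 # record hyperparameters
    mlflow.log_metric("accuracy", acc)        # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")  # version the trained model

Tracked runs can then be compared in the MLflow UI, which is a common starting point for the retraining and monitoring loops mentioned above.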
Bengaluru (Bangalore)
6 - 12 yrs
₹25L - ₹35L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+4 more

Experience Required:

  • 6+ years of data science experience.
  • Demonstrated experience in leading programs.
  • Prior experience in customer data platforms/finance domain is a plus.
  • Demonstrated ability in developing and deploying data-driven products.
  • Experience working with large datasets and developing scalable algorithms.
  • Hands-on experience working with tech, product, and operations teams.


Key Responsibilities:

Technical Skills:

  • Deep understanding of and hands-on experience with machine learning and deep learning algorithms. Good understanding of NLP and LLM concepts, and fair experience in developing NLU and NLG solutions.
  • Experience with the Keras/TensorFlow/PyTorch deep learning frameworks (a small Keras sketch follows this list).
  • Proficiency in scripting languages (Python/Shell) and SQL.
  • Good knowledge of statistics.
  • Experience with big data, cloud, and MLOps.
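Purely as an illustrative sketch of the frameworks named above (not from the posting; the data, layer sizes, and epochs are hypothetical):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-in data for a binary classification task.
X = np.random.rand(256, 16).astype("float32")
y = (X.sum(axis=1) > 8).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(16,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]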

Soft Skills:

  • Strong analytical and problem-solving skills.
  • Excellent presentation and communication skills.
  • Ability to work independently and deal with ambiguity.

Continuous Learning:

  • Stay up to date with emerging technologies.


Qualifications:

A degree (e.g., B.Tech) in Computer Science, Statistics, Applied Mathematics, Machine Learning, or a related field.

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will use a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will use a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines (a minimal PySpark ingestion sketch follows below); and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
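As a hedged illustration of the batch ingestion and wrangling work described above (not from the posting; the paths, columns, and quality rules are hypothetical):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-ingest-demo").getOrCreate()

# Hypothetical landing-zone path; replace with a real source.
raw = spark.read.option("header", True).csv("s3a://landing/orders/*.csv")

cleaned = (
    raw.dropDuplicates(["order_id"])                    # de-duplicate on the key
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)                     # basic data-quality rule
)

# Write a columnar, partitioned copy for downstream analytics.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated/orders/")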


Role & Responsibilities:

Your role focuses on the design, development, and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience, with 3+ years in data-related technologies.

2. Minimum 2.5 years of experience in Big Data technologies, with working exposure to the related data services of at least one cloud platform (AWS / Azure / GCP).

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Well-versed, working knowledge of data-platform-related services on at least one cloud platform, plus IAM and data security.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infra provisioning on cloud, automated build & deployment pipelines, and code quality.

6. Cloud data specialty and other related Big Data technology certifications.


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


42 Cards solution
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
10 - 13 yrs
₹25L - ₹28L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+7 more

Job Title: Head of Analytics

Job Location: Bangalore (on-site)

 

About Qrata:

Qrata matches top talent with global career opportunities from the world’s leading digital companies, including some of the world’s fastest-growing start-ups, using Qrata’s talent marketplaces. To sign up, please visit Qrata Talent Sign-Up.

 

We are currently scouting for a Head of Analytics.

 

Our Client Story:

Our client was founded by a team of seasoned bankers with over 120 years of collective experience in banking, financial services, and cards, encompassing strategy, operations, marketing, risk, and technology, both in India and internationally.

We offer credit card processing solutions that help banks effectively manage their credit card portfolios end-to-end. These solutions are customized to meet the unique strategic, operational, and compliance requirements of each bank.

1. Card Programs Built for Everyone: limit assignment based on customer risk assessment and credit profiles, including secured cards.

2. Cards That Can Be Used Everywhere: through POS machines, UPI, and e-commerce websites.

3. A Card for Everything: enable customer purchases, both large and small.

4. Customized Card Configurations: restrict usage based on merchant codes, location, amount limits, etc.

5. End-to-End Support: we undertake complete customer lifecycle management, from KYC checks, onboarding, risk profiling, and fraud control through billing and collections.

6. Rewards Program Management: we manage the entire card rewards and customer loyalty programs for you.

 

What you will do:

We are seeking an experienced individual for the role of Head of Analytics. As the Head of Analytics, you will be responsible for driving data-driven decision-making, implementing advanced analytics strategies, and providing valuable insights to optimize our credit card business operations, sales and marketing, risk management, and customer experience. Your expertise in statistical analysis, predictive modelling, and data visualization will be instrumental in driving growth and enhancing the overall performance of our credit card business.

 

Qualification:

  • Bachelor's or master’s degree in Technology, Mathematics, Statistics, Economics, Computer Science, or a related field
  • Proven experience (7+ years) in leading analytics teams in the credit card industry
  • Strong expertise in statistical analysis, predictive modelling, data mining, and segmentation techniques
  • Proficiency in data manipulation and analysis using programming languages such as Python, R, or SQL
  • Experience with analytics tools such as SAS, SPSS, or Tableau
  • Excellent leadership and team management skills, with a track record of building and developing high-performing teams
  • Strong knowledge of credit card business and understanding of credit card industry dynamics, including risk management, marketing, and customer lifecycle
  • Exceptional communication and presentation skills, with the ability to effectively communicate complex information to a varied audience

 

What you can expect:

1. Develop and implement Analytics Strategy:

o Define the analytics roadmap for the credit card business, aligning it with overall business objectives

o Identify key performance indicators (KPIs) and metrics to track the performance of the credit card business

o Collaborate with senior management and cross-functional teams to prioritize and execute analytics initiatives

2. Lead Data Analysis and Insights:

o Conduct in-depth analysis of credit card data, customer behaviour, and market trends to identify opportunities for business growth and risk mitigation

o Develop predictive models and algorithms to assess credit risk, customer segmentation, acquisition, retention, and upsell opportunities (a toy segmentation sketch appears at the end of this section)

o Generate actionable insights and recommendations based on data analysis to optimize credit card product offerings, pricing, and marketing strategies

o Regularly present findings and recommendations to senior leadership, using data visualization techniques to effectively communicate complex information

3. Drive Data Governance and Quality:

o Oversee data governance initiatives, ensuring data accuracy, consistency, and integrity across relevant systems and platforms

o Collaborate with IT teams to optimize data collection, integration, and storage processes to support advanced analytics capabilities

o Establish and enforce data privacy and security protocols to comply with regulatory requirements

4. Team Leadership and Collaboration:

o Build and manage a high-performing analytics team, fostering a culture of innovation, collaboration, and continuous learning

o Provide guidance and mentorship to the team, promoting professional growth and development

o Collaborate with stakeholders across departments, including Marketing, Risk Management, and Finance, to align analytics initiatives with business objectives

5. Stay Updated on Industry Trends:

o Keep abreast of emerging trends, techniques, and technologies in analytics, credit card business, and the financial industry

o Leverage industry best practices to drive innovation and continuous improvement in analytics methodologies and tools
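Purely as a hedged illustration of the segmentation work described above (not from the posting; the data frame, thresholds, and the utilization-based risk proxy are all hypothetical):

import pandas as pd

# Hypothetical card-spend data.
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "monthly_spend": [1200, 150, 5400, 300],
    "utilization": [0.35, 0.90, 0.20, 0.65],  # balance / credit limit
})

# Simple rule-based segmentation; a production team would use
# behavioural scorecards or clustering rather than fixed cut-offs.
df["segment"] = pd.cut(
    df["monthly_spend"], bins=[0, 500, 2000, float("inf")],
    labels=["low-spend", "mid-spend", "high-spend"],
)
df["risk_flag"] = df["utilization"] > 0.8  # high utilization as a crude risk proxy
print(df)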

 

For more Opportunities Visit: Qrata Opportunities.

Affine Analytics
Posted by Santhosh M
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹30L / yr
Data Warehouse (DWH)
Informatica
ETL
Google Cloud Platform (GCP)
Airflow
+2 more

Objective

The Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that the optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles and Responsibilities:

  • Comfortable building and optimizing performant data pipelines covering data ingestion, data cleansing, and curation into a data warehouse, database, or any other data platform, using Dask/Spark.
  • Experience with distributed computing environments and Spark/Dask architecture.
  • Optimize performance for data access requirements by choosing appropriate file formats (Avro, Parquet, ORC, etc.) and compression codecs (a short sketch follows this list).
  • Experience writing production-ready code in Python, with tests; participate in code reviews to maintain and improve code quality, stability, and supportability.
  • Experience in designing data warehouses/data marts.
  • Experience with any RDBMS (preferably SQL Server) and the ability to write complex SQL queries.
  • Expertise in requirements gathering and in producing technical design and functional documents.
  • Experience in Agile/Scrum practices.
  • Experience in leading other developers and guiding them technically.
  • Experience in deploying data pipelines using an automated CI/CD approach.
  • Ability to write modularized, reusable code components.
  • Proficient in identifying data issues and anomalies during analysis.
  • Strong analytical and logical skills.
  • Must be able to comfortably tackle new challenges and learn.
  • Must have strong verbal and written communication skills.
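A hedged PySpark sketch of the file-format and codec choice mentioned above (illustrative only; the paths and codecs are hypothetical):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()
df = spark.read.json("s3a://raw/events/")  # hypothetical source

# Columnar formats such as Parquet or ORC scan faster for analytics;
# the compression codec trades CPU time against storage and I/O.
df.write.option("compression", "snappy").parquet("s3a://curated/events_parquet/")
df.write.option("compression", "zlib").orc("s3a://curated/events_orc/")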


Required skills:

  • Knowledge of GCP
  • Expertise in Google BigQuery (a brief client-library sketch follows below)
  • Expertise in Airflow
  • Good hands-on SQL skills
  • Data warehousing concepts
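For illustration only (not part of the posting): a minimal BigQuery query via the official Python client, assuming google-cloud-bigquery is installed and credentials are configured; the project, dataset, and table names are hypothetical.

from google.cloud import bigquery

client = bigquery.Client()  # picks up credentials from the environment

# Hypothetical table; the query is parameterized rather than string-built.
query = """
    SELECT user_id, COUNT(*) AS orders
    FROM `my-project.sales.orders`
    WHERE order_date >= @start
    GROUP BY user_id
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01")]
    ),
)
for row in job.result():
    print(row.user_id, row.orders)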
Persistent Systems
Agency job via Milestone HR Consultancy by Haina Khan
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur
4 - 9 yrs
₹4L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
Greetings,

We have urgent requirements for Big Data Developer profiles at a reputed MNC.

Location: Pune / Bangalore / Hyderabad / Nagpur
Experience: 4-9 years

Skills: PySpark and AWS; or Spark, Scala, and AWS; or Python and AWS.
Velocity Services
Posted by Newali Hazarika
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
+7 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies, including NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling, and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into data warehouses (a minimal ELT sketch follows this list)

  • Implement data warehouse entities with common, re-usable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
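A minimal, self-contained sketch of the ELT pattern described above: load raw data first, then transform inside the warehouse with SQL, which is the shape of work a DBT model expresses. SQLite stands in for the real warehouse, and all table names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real warehouse

# Extract + Load: land the raw data untransformed.
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [("o1", "120.50", "paid"), ("o2", "oops", "paid"), ("o3", "99.99", "void")],
)

# Transform: build a cleaned model inside the warehouse with SQL.
conn.execute("""
    CREATE TABLE stg_orders AS
    SELECT order_id, CAST(amount AS REAL) AS amount
    FROM raw_orders
    WHERE status = 'paid' AND CAST(amount AS REAL) > 0
""")
print(conn.execute("SELECT * FROM stg_orders").fetchall())  # [('o1', 120.5)]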

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Past experience working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced in formulating ideas, building proofs-of-concept (POCs), and converting them into production-ready projects

  • Experience building and deploying applications on on-premise infrastructure and on AWS or Google Cloud

  • A basic understanding of Kubernetes and Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

MNC
Bengaluru (Bangalore)
4 - 7 yrs
₹25L - ₹28L / yr
Data Science
Data Scientist
R Programming
Python
SQL
  • Banking domain.
  • Assist the team in building machine learning/AI/analytics models on an open-source stack using Python and the Azure cloud stack.
  • Be part of the internal data science team at Fragma Data, which provides data science consultation to large organizations such as banks, e-commerce companies, and social media companies on their scalable AI/ML needs on the cloud; help build POCs and develop production-ready solutions.
  • Candidates will be provided with opportunities for on-the-job training and professional certifications in these areas: Azure Machine Learning services, Microsoft Customer Insights, Spark, chatbots, Databricks, NoSQL databases, etc.
  • Assist the team in occasionally conducting AI demos, talks, and workshops for large audiences of senior industry stakeholders.
  • Work on large enterprise-scale projects end-to-end, involving domain-specific projects across banking, finance, e-commerce, social media, etc.
  • Keen interest in learning new technologies and the latest developments, and in applying them to assigned projects.
Desired Skills
  • Professional hands-on coding experience in Python: over 1 year for Data Scientist, and over 3 years for Senior Data Scientist.
  • This is primarily a programming/development-oriented role; hence, strong skills in writing object-oriented, modular code in Python and experience pushing projects to production are important.
  • Strong foundational knowledge and professional experience in:
  • Machine Learning (compulsory)
  • Deep Learning (compulsory)
  • At least one of: Natural Language Processing, Computer Vision, Speech Processing, or Business Analytics
  • Understanding of database technologies and SQL (compulsory)
  • Knowledge of the following frameworks:
  • Scikit-learn (compulsory; a short sketch follows this list)
  • Keras/TensorFlow/PyTorch (at least one is compulsory)
  • API development in Python for ML models (good to have)
  • Excellent communication skills are necessary to succeed in this role, as it has high external visibility, with multiple opportunities to present data science results to large external audiences including VPs, Directors, and CXOs; communication skills will therefore be a key consideration in the selection process.
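An illustrative scikit-learn sketch of the compulsory stack above (hedged: the dataset, model, and split are arbitrary choices, not part of the posting):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A Pipeline keeps preprocessing and the model together, which also
# simplifies pushing the artifact to production.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")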
Dataweave Pvt Ltd
Posted by Megha M
Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
Python
Data Structures
Algorithms
Web Scraping
Relevant set of skills
● Good communication and collaboration skills, with 4-7 years of experience.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of building web scraping and data mining pipelines is a plus (a small scraping sketch follows below).
● Working knowledge of open-source tools such as MySQL, Solr, ElasticSearch, and Cassandra (data stores) would be a plus.
● Expert in Python programming.
Role and responsibilities
● Inclined towards working in a start-up environment.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Design and build robust, scalable data engineering solutions for structured and unstructured data, delivering business insights, reporting, and analytics.
● Expertise in troubleshooting, debugging, data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations, and other integrations).
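A hedged sketch of a polite scraper, relevant to the web-scraping skills above (the URL, selector, and rate limit are hypothetical; a real crawler should honor robots.txt and site terms):

import time

import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "demo-scraper/0.1 (contact@example.com)"}  # identify yourself

def scrape_titles(url):
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Hypothetical selector; adapt to the page's real markup.
    return [h2.get_text(strip=True) for h2 in soup.select("h2.product-title")]

for page in range(1, 4):
    print(scrape_titles(f"https://example.com/catalog?page={page}"))
    time.sleep(1)  # rate-limit between requests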
Infogain
Agency job
via Technogen India Pvt Ltd by Rahul Batta
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Data engineering
Python
SQL
Spark
PySpark
+10 more
1. Sr. Data Engineer:

Core Skills: Data Engineering, Big Data, PySpark, Spark SQL, and Python

A candidate with prior Palantir Cloud Foundry or Clinical Trial Data Model experience is preferred.

Major accountabilities:

  • Responsible for data engineering: Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, re-usable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Good understanding of the Foundry platform landscape and its capabilities.
  • Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
  • Defines company data assets (data models) and the PySpark / Spark SQL jobs that populate them.
  • Designs data integrations and the data quality framework.
  • Design & implement integration with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte agent.
  • Collaborate with data scientists, data analysts, and technology teams to document and leverage their understanding of the Foundry integration with different data sources; actively participate in agile work practices.
  • Coordinate with Quality Engineers to ensure that all quality controls, naming conventions, and best practices are followed.

Desired Candidate Profile:

  • Strong data engineering background
  • Experience with the Clinical Data Model is preferred
  • Experience in:
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, including 2+ years of experience with the Palantir Foundry Platform and 4+ years of experience with Big Data platforms
  • 5+ years of Python and PySpark development experience
  • Strong troubleshooting and problem-solving skills
  • B.Tech or master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience with the Palantir Foundry Platform and Foundry custom app development
  • Able to design and implement data integration between Palantir Foundry and external apps based on the Foundry data connector framework
  • Hands-on in programming languages, primarily Python, R, Java, and Unix shell scripts
  • Hands-on experience with the AWS / Azure cloud platforms and stack
  • Strong in API-based architecture and concepts; able to do quick PoCs using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users

  • Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.

Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹12L / yr
Python
Machine Learning (ML)
R
  • Proficient in R and Python
  • 1+ years of work experience, with at least 6 months working with Python
  • Prior experience building ML models
  • Prior experience with SQL
  • Knowledge of statistical techniques
  • Experience working with spatial data will be an added advantage