Underwriting Jobs in Chennai

11+ Underwriting Jobs in Chennai | Underwriting Job openings in Chennai

Apply to 11+ Underwriting Jobs in Chennai on CutShort.io. Explore the latest Underwriting Job opportunities across top companies like Google, Amazon & Adobe.

Chennai
17 - 23 yrs
₹1L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more

Purpose

·        The Analytics and BI Head will collaborate closely with leaders across product, sales, and marketing to support and implement high-quality, data-driven decisions.

·        The candidate will ensure data accuracy and consistent reporting by designing optimal processes and procedures for analytical managers and employees to create and follow.

·        The candidate will manage the processes and people responsible for correct data reporting, modeling, and analysis.

 

Key Responsibilities

Responsibilities will include but will not be restricted to:

·        Lead cross-functional projects involving advanced data modeling and analysis techniques and review insights that will guide strategic decisions and uncover optimization opportunities.

·        Manage project budgets and financials, including forecasting, monitoring, and reporting.

·        Provide clear and concise instructions to different teams to ensure quality delivery of analysis.

·        Maintain dashboards and performance metrics that support key business decisions.

·        Plan and execute strategies for completing projects on time.

·        Determine the need for training and talent development.

·        Recruit, train, develop, and supervise managerial-level employees.

·        Examine, interpret, and report results of analytical initiatives to stakeholders in leadership, technology, sales, marketing, and product teams.

·        Oversee the deliverable pipeline, including rationalization, de-duplication, and prioritization.

·        Anticipate future demands of initiatives related to people, technology, budget and business across the departments and review solutions to meet these needs.

·        Communicate results and business impacts of insight initiatives to stakeholders within and outside of the company.

 

Technical Requirements

·        Advanced knowledge of data analysis and modeling principles: KPI tree creation, reporting best practices, predictive analytics, and statistical and ML-based modeling techniques

·        20+ years of work experience in analytics, including 5+ years leading a team. Candidates from the insurance industry are preferred.

·        Understanding of and experience using analytical concepts and statistical techniques: hypothesis development, designing tests/experiments, analyzing data, drawing conclusions, and developing actionable recommendations for business units.

·        Strong problem solving, quantitative and analytical abilities.

·        Strong ability to plan and manage numerous processes, people, and projects simultaneously.

·        The right candidate will also be proficient and experienced with the following tools/programs:

§ Experience with working on big data environments: Teradata, Aster, Hadoop.

§ Experience with testing tools such as Adobe Test & Target.

§ Experience with data visualization tools: Tableau, Raw, chart.js.

§ Experience with Adobe Analytics and other analytics tools.

 

 

Desired Personal Qualities or Behavior

·        Strong leadership qualities, ability to communicate the vision crisply and drive collaboration and innovation.

·        Must be a self-starter.

·        Strong diligence, organizational skills, and the ability to manage multiple projects.

·        Strong written and verbal communication skills, able to communicate effectively and in a professional manner with all levels of the company and outside vendors.

·        Ability to work in a diverse team environment and effectively support the demanding needs of the company.

·        Ability to work under pressure, meet deadlines with shifting priorities.


Client of People First Consultants

Agency job
Bengaluru (Bangalore), Chennai
10 - 15 yrs
₹18L - ₹23L / yr
Data governance
Informatica
Informatica Data Quality

Qualifications & Skills

  • Proven track record in delivering Data Governance Solutions to a large enterprise
  • Knowledge of and experience in data governance frameworks, and in formulating data governance policies, standards, and processes
  • Experience in program management and in managing cross-functional stakeholders from senior leadership to project-manager level
  • Experience in leading a team of data governance business analysts
  • Experience in data governance tools like Informatica Data Quality, Enterprise Data Catalog, Axon, Collibra
  • Experience in metadata management, master and reference data management, data quality and data governance
IT-Startup In Chennai

Agency job
Chennai
3 - 5 yrs
₹12L - ₹20L / yr
Data Science
Data Scientist
R Programming
Python
Machine Learning (ML)
+9 more
  • 3+ years experience in practical implementation and deployment of ML based systems preferred.
  • BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
  • Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
  • Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
  • Experience in working on modeling graph structures related to spatiotemporal systems
  • Programming skills in Python
  • Experience in developing and deploying on cloud (AWS or Google or Azure)
  • Good verbal and written communication skills
  • Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
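The supervised-learning familiarity asked for above can be illustrated with a minimal sketch: a k-nearest-neighbours classifier in plain Python. The function name and toy data are invented for illustration; production work would use a library such as scikit-learn.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every labelled point, nearest first.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote among the k closest neighbours.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two clusters labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.05, 0.1)))  # prints "a": the query sits in the "a" cluster
```

The same idea scales poorly without spatial indexing, which is one reason the listing also asks for familiarity with established frameworks.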
Agiletech Info Solutions pvt ltd
Chennai
4 - 8 yrs
₹4L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
Spark
SQL
+1 more
We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.

The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
Responsibilities for Data Engineer
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
• Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
• Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
• Experience building and optimizing big data ETL pipelines, architectures and data sets.
• Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
• A successful history of manipulating, processing and extracting value from large disconnected datasets.
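The extract-transform-load responsibilities above can be sketched in miniature with an in-memory SQL pipeline: land raw rows, filter and aggregate them, and load a reporting table. Table and column names are hypothetical; a real pipeline would target a warehouse rather than SQLite.

```python
import sqlite3

# Extract: land raw events in a staging table (toy data for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id TEXT, amount REAL, status TEXT);
    INSERT INTO raw_events VALUES
        ('u1', 10.0, 'ok'), ('u1', 5.0, 'ok'),
        ('u2', 7.5,  'ok'), ('u2', NULL, 'failed');
""")
# Transform + load: drop bad rows, aggregate per user into a reporting table.
conn.executescript("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total, COUNT(*) AS n
    FROM raw_events
    WHERE status = 'ok' AND amount IS NOT NULL
    GROUP BY user_id;
""")
rows = conn.execute(
    "SELECT user_id, total, n FROM user_totals ORDER BY user_id"
).fetchall()
print(rows)  # [('u1', 15.0, 2), ('u2', 7.5, 1)]
```

The same filter-aggregate-load shape carries over directly to Spark or warehouse SQL at scale.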
Klenty

Posted by Bhuvanesh Ram M
Chennai
3 - 6 yrs
₹6L - ₹10L / yr
MongoDB

JD - MongoDB Engineer

  • Collection creation, access method tuning, sharding implementation, index creation, and debugging query execution to obtain top database performance.
  • Execute DB upgrades on time
  • Configuration of MongoDB instances, Schema and MongoDB modelling
  • Table design, index utilisation, indexing-strategy design, query-plan analysis, and performance optimisation
  • Comprehend and translate business requirements into technical specifications and build elegant, efficient, and scalable solutions based on specifications
  • Implement appropriate indexes (B-Tree, Geospatial, Text) for performance improvement
  • Implement Mongo Management Service for automating a variety of tasks, including backup/recovery and performance management
  • Recommend and implement best practices for Rest API integration framework/model
  • Develop MongoDB and API prototypes and proofs of concept
  • Implement optimal backup and recovery and assist developers in detecting performance problems using MMS and Mongo Profiler
  • Work closely with the development teams to implement new features and enhancements and the maintenance and support of existing applications in production
  • Automate routine DB operations with scripting
  • Configure, monitor, and deploy replica sets and upgrade databases through patches
  • Keep clear documentation of the database setup and architecture and plan procedures for backup in case of a data loss
  • Ensure that the databases achieve maximum performance and availability
  • Create roles and users and set their permissions and mentor staff on MongoDB best practices
  • Write procedures for backup and disaster recovery
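A few of the duties above, such as creating one index from each family named in the JD (B-Tree, geospatial, text) and debugging query execution, can be sketched as a mongosh session. This assumes a running MongoDB instance and a hypothetical `places` collection; it is an illustrative fragment, not a standalone script.

```javascript
// Create one example of each index type called out in the JD.
db.places.createIndex({ name: 1 });               // B-Tree index on a scalar field
db.places.createIndex({ location: "2dsphere" });  // geospatial index for $near queries
db.places.createIndex({ description: "text" });   // text index for keyword search

// Debug query execution: check whether the query plan uses the index
// (IXSCAN) rather than scanning the whole collection (COLLSCAN).
db.places.find({ name: "cafe" }).explain("executionStats");
```

Comparing `explain` output before and after adding an index is the usual first step in the query-plan analysis the role describes.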
Bypro Technologies

Posted by Karthikeyan Balasundaram
Chennai, Coimbatore
1 - 3 yrs
₹2L - ₹5L / yr
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

Job Description:

- Understanding of the depth and breadth of computer vision and deep learning algorithms.

- At least 2 years of experience in computer vision and/or deep learning for object detection and tracking, along with semantic or instance segmentation, in either an academic or industrial setting.

- Experience with machine/deep learning frameworks such as TensorFlow, Keras, scikit-learn and PyTorch.

- Experience in training models through GPU computing using NVIDIA CUDA or on the cloud.

- Ability to transform research articles into working solutions to solve real-world problems.

- Strong experience in using both basic and advanced image processing algorithms for feature engineering.

- Proficiency in Python and related packages like numpy, scikit-image, PIL, opencv, matplotlib, seaborn, etc.

- Excellent written and verbal communication skills for communicating effectively with the team, and the ability to present information to varied technical and non-technical audiences.

- Must be able to produce solutions independently in an organized manner and also be able to work in a team when required.

- Must have good Object-Oriented Programming and logical analysis skills in Python.
Tredence
Posted by Sharon Joseph
Bengaluru (Bangalore), Gurugram, Chennai, Pune
7 - 10 yrs
Best in industry
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
+1 more

Job Summary

As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.

  1. Work with the team to define business requirements, design the analytical solution, and deliver it with a specific focus on the big picture to drive the robustness of the solution
  2. Work with teams of smart collaborators. Be responsible for their appraisals and career development.
  3. Participate and lead executive presentations with client leadership stakeholders.
  4. Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
  5. See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.

Role & Responsibilities

  1. Serve as expert in Data Science, build framework to develop Production level DS/AI models.
  2. Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
  3. Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
  4. Lead and manage the onsite-offshore relation, at the same time adding value to the client.
  5. Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
  6. Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
  7. Present results, insights, and recommendations to senior management with an emphasis on the business impact.
  8. Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
  9. Lead or contribute to org level initiatives to build the Tredence of tomorrow.

 

Qualification & Experience

  1. Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
  2. 6-10+ years of experience in data science, building hands-on ML models
  3. Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
  4. Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines, ANOVA, Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
  5. Knowledge of programming languages SQL, Python/ R, Spark.
  6. Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
  7. Experience with cloud computing services (AWS, GCP or Azure)
  8. Expert in Statistical Modelling & Algorithms E.g. Hypothesis testing, Sample size estimation, A/B testing
  9. Knowledge of mathematical programming (Linear Programming, Mixed Integer Programming, etc.) and stochastic modelling (Markov chains, Monte Carlo, stochastic simulation, queuing models).
  10. Experience with optimization solvers (Gurobi, CPLEX) and algebraic modeling languages (PuLP)
  11. Knowledge in GPU code optimization, Spark MLlib Optimization.
  12. Familiarity with deploying and monitoring ML models in production, delivering data products to end-users.
  13. Experience with ML CI/CD pipelines.
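The sample-size estimation in item 8 can be sketched with the conventional normal-approximation formula for two proportions. The z-values for a 5% two-sided alpha and 80% power are hardcoded, and the function name is invented for illustration; a real experiment design would use a dedicated power-analysis library.

```python
import math

def ab_sample_size(p1, p2, alpha_z=1.96, power_z=0.84):
    """Per-arm sample size to detect a shift from p1 to p2 with ~80% power
    at a 5% two-sided alpha, via the normal approximation for two proportions."""
    var = p1 * (1 - p1) + p2 * (1 - p2)      # summed Bernoulli variances
    effect = abs(p2 - p1)                     # minimum detectable difference
    return math.ceil((alpha_z + power_z) ** 2 * var / effect ** 2)

# E.g. detecting a lift from a 10% to a 12% conversion rate:
n = ab_sample_size(0.10, 0.12)
print(n)  # → 3834 users per arm
```

Note how the required sample grows with the inverse square of the effect size: halving the detectable lift roughly quadruples the traffic needed.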
Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence's clients to identify critical business problems, define the need for data engineering solutions, and build the strategy and roadmap
• Possesses wide exposure to the complete lifecycle of data, from creation to consumption
• Has in the past built repeatable tools/data models to solve specific business problems
• Should have hands-on experience of working on projects (either as a consultant or within a company) that required them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or as part of the team, to implement process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• Must have a solution-creation mindset, with the ability to design and enhance scalable data platforms to address business needs
• Working experience with data engineering tools on one or more cloud platforms: Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions
o Should have experience working with technology teams
• Demonstrated ability in thought leadership: articles/white papers/interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Tier-1 company

Agency job
Chennai, Bengaluru (Bangalore)
3 - 12 yrs
₹1L - ₹12L / yr
Data Analytics
SOX 404
Audit
Qlikview
PowerBI
+2 more

JOB SUMMARY:  The Senior Associate supports the Data Analytics Manager by proposing relevant analytics procedures/tools, executing the analytics, and developing visualization outputs for audits, continuous monitoring/auditing, and IA initiatives. The individual’s responsibilities include:

Understanding audit and/or project objectives and assisting the manager in preparing the plan and timelines.

Working with the Process/BU/IA teams for gathering requirements for continuous monitoring/auditing projects.

Working with Internal audit project teams to understand the analytics requirements for audit engagements.

Independently building pilots/prototypes, determining the appropriate visualization tool, and designing views to meet project objectives.

Proficient in data management and data mining.

Highly skilled in visualization tools such as QlikView, Qlik Sense, Power BI, Tableau, Alteryx, etc.

Working with Data Analytics Manager to develop analytics program aligned to the overall audit plan.

Showcasing analytics capability to Process management teams to increase adoption of continuous monitoring.

Establishing and maintaining relationships with all key stakeholders of internal audit.

Coaching other data analysts on analytics procedures, coding and tools.

Taking a significant and active role in developing and driving Internal Audit Data Analytics quality and knowledge sharing to enhance the value provided to Internal Audit stakeholders.

Ensuring timely and accurate time tracking.

Continuously focusing on self-development by attending trainings, seminars and acquiring relevant certifications.

Chennai, Hyderabad
5 - 10 yrs
₹10L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Big Data with Cloud:

 

Experience : 5-10 years

 

Location : Hyderabad/Chennai

 

Notice period: 15-20 days max

 

1.  Expertise in building AWS data engineering pipelines with AWS Glue → Athena → QuickSight

2.  Experience in developing lambda functions with AWS Lambda

3.  Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to do transformations with Spark

4.  Should be able to code in Python and Scala.

5.  Snowflake experience will be a plus

netmedscom

Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will closely collaborate with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional / non-functional business requirements, and foster data-driven decision-making across the organization.
  • Responsible for designing and developing distributed, high-volume, high-velocity, multi-threaded event-processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack, such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Should have strong skills in PySpark (Python & Spark). Ability to create, manage and manipulate Spark DataFrames. Expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Should have knowledge on Shell Scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • Having a good understanding of machine learning landscape and concepts. 

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science, from premier institutions, with proven work experience as a Big Data Engineer or in a similar role for 3-5 years.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification
