Senior Project Manager

at Fragma Data Systems

Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
Full time
Skills
Project Management
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
Spotfire
SQL
Banking
  • Gathering project requirements from customers and supporting their requests.
  • Creating project estimates and scoping the solution based on clients’ requirements.
  • Delivering on key project milestones in line with the project plan and budget.
  • Establishing individual project plans and working with the team to prioritize production schedules.
  • Communicating milestones to the team and to clients via scheduled work-in-progress meetings.
  • Designing and documenting product requirements.
  • Good analytical skills; detail-oriented.
  • Familiarity with Microsoft applications and working knowledge of MS Excel.
  • Knowledge of MIS reports and dashboards.
  • Maintaining strong customer relationships with a positive, can-do attitude.

About Fragma Data Systems

Founded: 2015
Stage: Profitable
About

Fragma is a leading Big Data, AI, and advanced analytics company providing services to global clients.

Connect with the team
Mallikarjun Degul
Sandhya JD
Varun Reddy
Priyanka U
Simpy kumari
Minakshi Kumari
Latha Yuvaraj
Vamsikrishna G

Similar jobs

Client of People First Consultants
Agency job
via People First Consultants by Aishwarya KA
Remote, Chennai
3 - 6 yrs
Best in industry
Machine Learning (ML)
Data Science
Deep Learning
Artificial Intelligence (AI)
Python

Skills: Machine Learning, Deep Learning, Artificial Intelligence, Python.

Location: Chennai


Domain knowledge:
Data cleaning, modelling, analytics, statistics, machine learning, AI

Requirements:

·         To be part of Digital Manufacturing and Industrie 4.0 projects across the Saint Gobain group of companies

·         Design and develop AI/ML models to be deployed across SG factories (a minimal sketch follows at the end of this listing)

·         Knowledge of Hadoop, Apache Spark, MapReduce, Scala, Python programming, and SQL and NoSQL databases is required

·         Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks

·         Prior experience in developing AI and ML models is required

·         Experience with data from the Manufacturing Industry would be a plus

Roles and Responsibilities:

·         Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics

·         Multitasking and good communication skills are necessary

·         Entrepreneurial attitude.
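Purely for illustration, here is a minimal Python sketch of the kind of ML model development this listing describes. The synthetic "sensor" features, target, and model choice are assumptions for the example, not Saint Gobain's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic "factory sensor" data: e.g. temperature, pressure, line speed
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)  # e.g. energy use

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))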

 
at ketteQ
1 recruiter
Posted by Nikhil Jain
Remote only
5 - 15 yrs
₹20L - ₹35L / yr
ETL
SQL
PostgreSQL

ketteQ is a supply chain planning and automation platform. We are looking for an extremely strong and experienced Technical Consultant to help with system design, data engineering, and software configuration and testing during the implementation of supply chain planning solutions. This job comes with a very attractive compensation package and a work-from-home benefit. If you are a high-energy, motivated, and initiative-taking individual, then this could be a fantastic opportunity for you.

 

Responsible for the technical design and implementation of supply chain planning solutions.

 

 

Responsibilities

  • Design and document system architecture
  • Design data mappings
  • Develop integrations
  • Test and validate data
  • Develop customizations
  • Deploy solution
  • Support demo development activities

Requirements

  • Minimum 5 years’ experience in the technical implementation of enterprise software, preferably supply chain planning software
  • Proficiency in ANSI SQL/PostgreSQL
  • Proficiency in ETL tools such as Pentaho, Talend, Informatica, and MuleSoft
  • Experience with web services and REST APIs (see the sketch after this list)
  • Knowledge of AWS
  • Salesforce and Tableau experience is a plus
  • Excellent analytical skills
  • Must possess excellent verbal and written communication skills and be able to communicate effectively with international clients
  • Must be a self-starter and a highly motivated individual looking to make a career in supply chain management
  • Quick thinker with proven decision-making and organizational skills
  • Must be flexible to work non-standard hours to accommodate globally dispersed teams and clients
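As a hedged illustration of the REST and PostgreSQL skills above, here is a minimal Python extract-load sketch. The endpoint, table schema, and credentials are hypothetical placeholders, not ketteQ's actual systems.

import psycopg2
import requests

# Extract: fetch records from a hypothetical REST endpoint
resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
orders = resp.json()  # assumed shape: [{"id": ..., "sku": ..., "qty": ...}, ...]

# Transform: keep only the fields the (hypothetical) target table needs
rows = [(o["id"], o["sku"], int(o["qty"])) for o in orders]

# Load: upsert into PostgreSQL; assumes orders(id PRIMARY KEY, sku, qty) exists
conn = psycopg2.connect(host="localhost", dbname="planning", user="etl", password="...")
with conn, conn.cursor() as cur:
    cur.executemany(
        "INSERT INTO orders (id, sku, qty) VALUES (%s, %s, %s) "
        "ON CONFLICT (id) DO UPDATE SET sku = EXCLUDED.sku, qty = EXCLUDED.qty",
        rows,
    )
conn.close()

In practice an ETL tool such as Pentaho or Talend would usually own this flow; the sketch only shows the underlying mechanics.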

Education

  • Bachelor’s in Engineering from a top-ranked university with above-average grades
at RedSeer Consulting
2 recruiters
Posted by Raunak Swarnkar
Bengaluru (Bangalore)
0 - 2 yrs
₹10L - ₹15L / yr
Python
PySpark
SQL
pandas
Cloud Computing

BRIEF DESCRIPTION:

At least 1 year of Python, Spark, SQL, and data engineering experience

Primary skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, Redshift/Snowflake

Relevant experience: migration of legacy ETL jobs to AWS Glue using a Python & Spark combination

 

ROLE SCOPE:

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow best practices and enable appropriate audit & control mechanisms

Be analytically skilled: identify root causes quickly and debug issues efficiently

Take ownership of the deliverables and support the deployments

 

REQUIREMENTS:

Create data pipelines for data integration into cloud stacks, e.g. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet, and .avro files (illustrated in the sketch below)
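As a quick illustration of the file-format requirement above, a minimal PySpark sketch. The paths are placeholders, and reading Avro assumes the external spark-avro package is on the classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()

# Paths are placeholders; Avro support requires the external spark-avro package
# (e.g. spark-submit --packages org.apache.spark:spark-avro_2.12:3.5.0 ...)
json_df = spark.read.json("s3://bucket/events/*.json")            # newline-delimited JSON
parquet_df = spark.read.parquet("s3://bucket/events/*.parquet")   # columnar, schema embedded
avro_df = spark.read.format("avro").load("s3://bucket/events/*.avro")

# Quick sanity check that all three sources loaded
for name, df in [("json", json_df), ("parquet", parquet_df), ("avro", avro_df)]:
    print(name, df.count())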

 

PREFERRED BACKGROUND:

Tier 1/2 candidates from IITs/NITs/IIITs preferred; however, relevant experience and a learning attitude take precedence.

Posted by Vineeta Bajaj
Bengaluru (Bangalore), Mumbai
5 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
Python

The Nitty-Gritties

Location: Bengaluru/Mumbai

About the Role:

Freight Tiger is growing exponentially, and technology is at the centre of it. Our Engineers love solving complex industry problems by building modular and scalable solutions using cutting-edge technology. Your peers will be an exceptional group of Software Engineers, Quality Assurance Engineers, DevOps Engineers, and Infrastructure and Solution Architects.

This role is responsible for developing data pipelines and data engineering components to support strategic initiatives and ongoing business processes. This role works with leads, analysts, and data scientists to understand requirements, develop technical solutions, and ensure the reliability and performance of the data engineering solutions.

This role provides an opportunity to directly impact business outcomes for sales, underwriting, claims and operations functions across multiple use cases by providing them data for their analytical modelling needs.

Key Responsibilities

  • Create and maintain a data pipeline.
  • Build and deploy ETL infrastructure for optimal data delivery.
  • Work with various product, design and executive teams to troubleshoot data-related issues.
  • Create tools for data analysts and scientists to help them build and optimise the product.
  • Implement systems and processes for data access controls and guarantees.
  • Distil knowledge from experts in the field outside the org and use it to optimise internal data systems.




Preferred Qualifications/Skills

  • Should have 5+ years of relevant experience.
  • Strong analytical skills.
  • Degree in Computer Science, Statistics, Informatics, Information Systems.
  • Strong project management and organisational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • SQL guru with hands-on experience across various databases.
  • NoSQL databases like Cassandra and MongoDB.
  • Experience with Snowflake and Redshift.
  • Experience with tools like Airflow and Hevo (a minimal DAG sketch follows this list).
  • Experience with Hadoop, Spark, Kafka, and Flink.
  • Programming experience in Python, Java, and Scala.
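For candidates unfamiliar with Airflow, a minimal, hypothetical DAG sketch (assuming Airflow 2.4+). The DAG name, task, and schedule are illustrative, not Freight Tiger's actual pipeline.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_daily_data():
    # Placeholder for real extract/transform/load logic
    print("loading the day's data...")

with DAG(
    dag_id="daily_data_load",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_daily_data",
        python_callable=load_daily_data,
    )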
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹2.5L - ₹4L / yr
SQL
Data engineering
Big Data
Python
● Hands-on work experience as a Python Developer
● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
● Knowledge of object-relational mapping (ORM); see the sketch after this list
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to Big Data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Write effective, scalable code
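A minimal, hypothetical Flask + SQLAlchemy sketch of the framework and ORM skills listed above; the User model and endpoint are illustrative only, not part of any actual product.

from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///demo.db"  # throwaway local DB
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)

@app.route("/users")
def list_users():
    # The ORM turns this into a SELECT; no hand-written SQL needed
    return jsonify([{"id": u.id, "name": u.name} for u in User.query.all()])

if __name__ == "__main__":
    with app.app_context():
        db.create_all()
        if User.query.count() == 0:
            db.session.add(User(name="Asha"))
            db.session.commit()
    app.run(debug=True)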
at Fragma Data Systems
8 recruiters
Posted by Priyanka U
Remote only
4 - 10 yrs
₹12L - ₹23L / yr
Informatica
ETL
Big Data
Spark
SQL
Skill: Informatica with Big Data Management

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
4. Should have experience in Hadoop, Spark, etc.

Work days: Sun-Thu
Day shift
at Syrencloud
3 recruiters
Posted by Samarth Patel
Hyderabad
3 - 7 yrs
₹5L - ₹8L / yr
Data Analytics
Data analyst
SQL
SAP
Our growing technology firm is looking for an experienced Data Analyst who can turn project requirements into custom-formatted data reports. The ideal candidate can handle complete life-cycle data generation and outline critical information for each Project Manager. We also need someone who can analyze business procedures and recommend specific types of data that can be used to improve them.
fintech
Agency job
via Talentojcom by Raksha Pant
Remote only
2 - 6 yrs
₹9L - ₹30L / yr
ETL
Druid Database
Java
Scala
SQL
● Education in a science, technology, engineering, or mathematics discipline, preferably a bachelor’s degree or equivalent experience
● Knowledge of database fundamentals and fluency in advanced SQL, including concepts such as windowing functions (see the sketch after this list)
● Knowledge of popular scripting languages for data processing such as Python, as well as familiarity with common frameworks such as Pandas
● Experience building streaming ETL pipelines with tools such as Apache Flink, Apache Beam, Google Cloud Dataflow, DBT, and equivalents
● Experience building batch ETL pipelines with tools such as Apache Airflow, Spark, DBT, or custom scripts
● Experience working with messaging systems such as Apache Kafka (and hosted equivalents such as Amazon MSK) and Apache Pulsar
● Familiarity with BI applications such as Tableau, Looker, or Superset
● Hands-on coding experience in Java or Scala
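To make the windowing-functions requirement concrete, a small self-contained Python sketch using the built-in sqlite3 module (window functions need SQLite 3.25+); the trades table and values are made up for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades (account TEXT, ts INTEGER, amount REAL);
    INSERT INTO trades VALUES
        ('a1', 1, 100.0), ('a1', 2, 250.0), ('a2', 1, 75.0), ('a2', 2, 50.0);
""")

# Running total per account, ordered by time: a classic window query
rows = conn.execute("""
    SELECT account, ts, amount,
           SUM(amount) OVER (PARTITION BY account ORDER BY ts) AS running_total
    FROM trades
    ORDER BY account, ts
""").fetchall()

for r in rows:
    print(r)  # ('a1', 1, 100.0, 100.0), ('a1', 2, 250.0, 350.0), ...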
at Yulu Bikes
1 video
3 recruiters
Posted by Keerthana k
Bengaluru (Bangalore)
1 - 2 yrs
₹7L - ₹12L / yr
Data Science
Data Analytics
SQL
Python
Datawarehousing
Skill Set
SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus.

JD 

- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations (a small pandas sketch follows this JD).

- Work with management to prioritise business KPIs and information needs; locate and define new process-improvement opportunities.

- Technical expertise with data models, database design and development, data mining and segmentation techniques

- Proven success in a collaborative, team-oriented environment

- Working experience with geospatial data will be a plus.
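As a hedged example of the trend-analysis work described above, a small pandas sketch on synthetic daily ride counts; the column names and numbers are invented for illustration, not Yulu's actual data.

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
days = pd.date_range("2024-01-01", periods=60, freq="D")
rides = pd.DataFrame({
    "date": days,
    "rides": rng.poisson(lam=500, size=60),  # fake daily ride counts
})

# A 7-day rolling mean smooths day-to-day noise and exposes the trend
rides["rolling_7d"] = rides["rides"].rolling(window=7).mean()

print(rides.tail())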
at cemtics
1 recruiter
Posted by Tapan Sahani
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹5L - ₹12L / yr
Big Data
Spark
Hadoop
SQL
Python

JD:

Required Skills:

  • Intermediate to expert-level hands-on programming using one of these languages: Java, Python, PySpark, or Scala.
  • Strong practical knowledge of SQL.
  • Hands-on experience with Spark/SparkSQL (see the sketch after this list).
  • Data structures and algorithms.
  • Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications based on Big Data technologies.
  • Experience with Big Data application tools such as Hadoop, MapReduce, Spark, etc.
  • Experience with NoSQL databases like HBase, etc.
  • Experience with the Linux OS environment (shell scripting, AWK, SED).
  • Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a big RDBMS (100+ tables).
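A small PySpark/SparkSQL sketch of the Spark skills named above; the trip and driver rows are synthetic and the query is deliberately simple.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-demo").getOrCreate()

trips = spark.createDataFrame(
    [(1, "d1", 12.5), (2, "d1", 3.0), (3, "d2", 8.2)],
    ["trip_id", "driver_id", "km"],
)
drivers = spark.createDataFrame([("d1", "Asha"), ("d2", "Ravi")], ["driver_id", "name"])

trips.createOrReplaceTempView("trips")
drivers.createOrReplaceTempView("drivers")

# Total distance per driver via plain SparkSQL over the temp views
spark.sql("""
    SELECT d.name, SUM(t.km) AS total_km
    FROM trips t JOIN drivers d ON t.driver_id = d.driver_id
    GROUP BY d.name
""").show()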