
11+ FTP Jobs in Mumbai | FTP Job openings in Mumbai

Apply to 11+ FTP Jobs in Mumbai on CutShort.io. Explore the latest FTP Job opportunities across top companies like Google, Amazon & Adobe.

Wellness Forever Medicare Private Limited
Mumbai
3 - 5 yrs
₹7L - ₹11L / yr
Data Warehouse (DWH)
Informatica
ETL
SQL server
Microsoft Windows Azure
+4 more
  • Minimum 3-4 years of experience with ETL tools, SQL, SSAS & SSIS
  • Good understanding of Data Governance, including Master Data Management (MDM) and Data Quality tools and processes
  • Knowledge of programming languages, e.g. JSON, Python, R
  • Hands-on experience with SQL database design
  • Experience working with REST APIs (a sketch of this pattern follows the list)
  • Influencing and supporting project delivery through involvement in project/sprint planning and QA
  • Working experience with Azure
  • Stakeholder management
  • Good communication skills
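For illustration only (not part of the original listing): a minimal Python sketch of the REST-to-SQL-Server ingestion pattern implied by the requirements above. The endpoint, connection string, and staging table are hypothetical placeholders.

```python
import requests
import pyodbc

# Hypothetical endpoint and connection string -- placeholders, not from the listing.
API_URL = "https://api.example.com/v1/orders"
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver;DATABASE=staging;Trusted_Connection=yes"
)

def load_orders() -> None:
    # Extract: pull JSON from the REST API.
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    rows = [(o["id"], o["customer"], o["amount"]) for o in response.json()]

    # Load: stage the rows in SQL Server for downstream ETL.
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.fast_executemany = True
        cursor.executemany(
            "INSERT INTO stg.orders (order_id, customer, amount) VALUES (?, ?, ?)",
            rows,
        )
        conn.commit()

if __name__ == "__main__":
    load_orders()
```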
UpSolve Solutions LLP
Posted by Shaurya Kuchhal
Mumbai
2 - 4 yrs
₹7L - ₹11L / yr
Machine Learning (ML)
Data Science
Microsoft Windows Azure
Google Cloud Platform (GCP)
Python
+3 more

About UpSolve

Work on a cutting-edge tech stack and build innovative solutions across Computer Vision, NLP, Video Analytics and IoT.


Job Role

  • Ideate use cases that incorporate recent tech releases.
  • Discuss business plans and assist teams in aligning with dynamic KPIs.
  • Design the solution architecture end to end: from inputs, through the infrastructure and services used, to the data store.


Job Requirements

  • Working knowledge of Azure Cognitive Services.
  • Project experience building AI solutions such as chatbots, sentiment analysis, image classification, etc. (a sentiment-analysis sketch follows this list).
  • Quick learner and problem solver.
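For illustration (not part of the listing): a minimal sentiment-analysis sketch using the azure-ai-textanalytics SDK, one common way to use Azure Cognitive Services from Python. The endpoint and key are placeholders for your own resource.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint and key -- substitute your own Cognitive Services resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["The delivery was late, but support resolved it quickly."]
for result in client.analyze_sentiment(documents=docs):
    if not result.is_error:
        # Overall label plus per-class confidence scores.
        print(result.sentiment, result.confidence_scores)
```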


Job Qualifications

  • Work Experience: 2+ years
  • Education: Computer Science/IT Engineer
  • Location: Mumbai
Personal Care Product Manufacturing
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

DATA ENGINEER


Overview

They started with a singular belief: what is beautiful cannot and should not be defined in marketing meetings. It's defined by regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people, proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.


Responsibilities:

  • Design and implement scalable and robust data pipelines to collect, process, and store data from various sources (a PySpark sketch follows this list).
  • Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
  • Optimize and tune the performance of data systems to ensure efficient data processing and analysis.
  • Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.
  • Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.
  • Implement and maintain data governance and security measures to protect sensitive data.
  • Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.
  • Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
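For illustration (not part of the listing): a minimal PySpark extract-transform-load sketch of the pipeline work described above. The storage paths and column names are invented placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files landed by an upstream source (placeholder path).
raw = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")

# Transform: cast types, drop rows missing a key, stamp the load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet for downstream analysis.
clean.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3a://curated-bucket/orders/"
)
```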


Qualifications:

  • Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.
  • Proven experience as a Data Engineer or in a similar role, working with large-scale data processing and storage systems.
  • Strong programming skills in languages such as Python, Java, or Scala.
  • Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.
  • Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
  • Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
  • Solid understanding of data modeling, data warehousing, and ETL principles.
  • Knowledge of data integration techniques and tools (e.g., Apache NiFi, Talend, or Informatica).
  • Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
  • Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

  • Advanced knowledge of distributed computing and parallel processing.
  • Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
  • Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).
  • Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
  • Certification in relevant technologies or data engineering disciplines.



This opening is with an MNC
Mumbai, Malad, Andheri
8 - 13 yrs
₹13L - ₹22L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+8 more

  • Minimum of 8 years of experience, of which 4 years should be applied data mining experience in disciplines such as call centre metrics.
  • Strong experience in advanced statistics and analytics, including segmentation, modelling, regression, forecasting, etc. (a segmentation sketch follows this list).
  • Experience with leading and managing large teams.
  • Demonstrated pattern of success in using advanced quantitative analytic methods to solve business problems.
  • Demonstrated experience with Business Intelligence/Data Mining tools to work with data, investigate anomalies, construct data sets, and build models.
  • Critical to share details on projects undertaken (preferably in the telecom industry), specifically through analysis from CRM.
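For illustration (not part of the listing): a minimal scikit-learn segmentation sketch of the kind of analysis listed above. The "call-centre metrics" here are invented toy data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy rows standing in for call-centre metrics:
# (avg handle time sec, calls per day, repeat-call rate) -- invented data.
X = np.array([
    [300, 45, 0.10],
    [620, 20, 0.35],
    [290, 50, 0.08],
    [640, 18, 0.40],
])

# Standardize the features, then segment into two clusters.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X_scaled)
print(labels)  # cluster id per row
```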

Oneture Technologies
Posted by Ravi Mevcha
Mumbai, Navi Mumbai
2 - 4 yrs
₹8L - ₹12L / yr
Spark
Big Data
ETL
Data engineering
ADF
+4 more

Job Overview


We are looking for a Data Engineer to join our data team to solve data-driven critical business problems. The hire will be responsible for expanding and optimizing the existing end-to-end architecture, including the data pipeline architecture. The Data Engineer will collaborate with software developers, database architects, data analysts, data scientists and the platform team on data initiatives, and will ensure that an optimal data delivery architecture is consistent throughout ongoing projects. The right candidate should have hands-on experience developing a hybrid set of data pipelines depending on the business requirements.

Responsibilities

  • Develop, construct, test and maintain existing and new data-driven architectures.
  • Align architecture with business requirements and provide solutions that best solve the business problems.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
  • Acquire data from multiple sources across the organization.
  • Use programming languages and tools efficiently to collate the data.
  • Identify ways to improve data reliability, efficiency and quality.
  • Use data to discover tasks that can be automated.
  • Deliver updates to stakeholders based on analytics.
  • Set up practices for data reporting and continuous monitoring.
Required Technical Skills

  • Graduate in Computer Science or a similar quantitative area.
  • 1+ years of relevant work experience as a Data Engineer or in a similar role.
  • Advanced SQL knowledge and data modelling; experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
  • Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
  • Strong big-data core knowledge and experience programming in Spark with Python/Scala.
  • Experience with an orchestration tool like Airflow or similar (a DAG sketch follows this list).
  • Experience with Azure Data Factory is good to have.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Good understanding of Git workflow; test-case-driven development and CI/CD experience is good to have.
  • Some understanding of Delta tables is good to have.
  • It would be an advantage if the candidate also has experience with the following software/tools:
  • Big data tools: Hadoop, Spark, Hive, etc.
  • Relational SQL and NoSQL databases.
  • Cloud data services.
  • Object-oriented/object function scripting languages: Python, Scala, etc.
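For illustration (not part of the listing): a minimal Apache Airflow DAG sketch of the orchestration experience asked for above. The task bodies are stubs and all names are placeholders.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull data from source systems

def transform():
    ...  # e.g. submit a Spark job on the extracted data

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform after extract
```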
Mumbai
2 - 5 yrs
₹2L - ₹8L / yr
Data Warehouse (DWH)
Informatica
ETL
Microsoft Windows Azure
Big Data
+1 more
1. Responsible for the evaluation of cloud strategy and program architecture.
2. Responsible for gathering system requirements, working together with application architects and owners.
3. Responsible for generating the scripts and templates required for automatic provisioning of resources (a provisioning sketch follows this section).
4. Discover standard cloud services offerings; install and execute processes and standards for optimal use of cloud service provider offerings.
5. Incident management on IaaS, PaaS, SaaS.
6. Responsible for debugging technical issues inside a complex stack involving virtualization, containers, microservices, etc.
7. Collaborate with the engineering teams to enable their applications to run on cloud infrastructure.
8. Experience with OpenStack, Linux, Amazon Web Services, Microsoft Azure, DevOps, NoSQL, etc. will be a plus.
9. Design, implement, configure, and maintain various Azure IaaS, PaaS, SaaS services.
10. Deploy and maintain Azure IaaS Virtual Machines and Azure Application and Networking Services.
11. Optimize Azure billing for cost/performance (VM optimization, reserved instances, etc.).
12. Implement and fully document IT projects.
13. Identify improvements to IT documentation, network architecture, processes/procedures, and tickets.
14. Research products and new technologies to increase the efficiency of business and operations.
15. Keep all tickets and projects updated, and track time in a detailed format.
16. Should be able to multi-task and work across a range of projects and issues with various timelines and priorities.
Technical:
• Minimum 1 year of experience with Azure; knowledge of Office 365 services preferred.
• Formal education in IT preferred.
• Experience with the Managed Service business model a major plus.
• Bachelor’s degree preferred.
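For illustration (not part of the listing): a minimal Python sketch of scripted Azure provisioning using the azure-identity and azure-mgmt-resource SDKs. The subscription id and names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription id -- substitute your own.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Idempotently provision a resource group, the container into which
# further scripted resources (VMs, networking, etc.) would be deployed.
rg = client.resource_groups.create_or_update(
    "rg-demo",
    {"location": "centralindia"},
)
print(rg.name, rg.location)
```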
Mumbai
5 - 8 yrs
₹25L - ₹30L / yr
SQL Azure
ADF
Azure data factory
Azure Datalake
Azure Databricks
+13 more
As a hands-on Data Architect, you will be part of a team responsible for building enterprise-grade Data Warehouse and Analytics solutions that aggregate data across diverse sources and data types, including text, video and audio through to live stream and IoT, in an agile project delivery environment with a focus on DataOps and Data Observability. You will work with Azure SQL Databases, Synapse Analytics, Azure Data Factory, Azure Datalake Gen2, Azure Databricks, Azure Machine Learning, Azure Service Bus, Azure Serverless (LogicApps, FunctionApps), Azure Data Catalogue and Purview, among other tools, gaining opportunities to learn some of the most advanced and innovative techniques in the cloud data space.

You will be building Power BI based analytics solutions to provide actionable insights into customer data, and to measure operational efficiencies and other key business performance metrics.

You will be involved in the development, build, deployment, and testing of customer solutions, with responsibility for the design, implementation and documentation of the technical aspects, including integration, to ensure the solution meets customer requirements. You will be working closely with fellow architects, engineers, analysts, team leads and project managers to plan, build and roll out data-driven solutions.

Expertise:
• Proven expertise in developing data solutions with Azure SQL Server and Azure SQL Data Warehouse (now Synapse Analytics).
• Demonstrated expertise in data modelling and data warehouse methodologies and best practices.
• Ability to write efficient data pipelines for ETL using Azure Data Factory or equivalent tools (a pipeline-trigger sketch follows this listing).
• Integration of data feeds utilising both structured (e.g. XML/JSON) and flat schemas (e.g. CSV, TXT, XLSX) across a wide range of electronic delivery mechanisms (API/SFTP/etc.).
• Azure DevOps knowledge essential for CI/CD of data ingestion pipelines and integrations.
• Experience with object-oriented/object function scripting languages such as Python, Java, JavaScript, C#, Scala, etc. is required.
• Expertise in creating technical and architecture documentation (e.g. HLD/LLD) is a must.
• Proven ability to rapidly analyse and design solution architecture in client proposals is an added advantage.
• Expertise with big data tools (Hadoop, Spark, Kafka), NoSQL databases and stream-processing systems is a plus.

Essential Experience:
• 5 or more years of hands-on experience in a data architect role with the development of ingestion, integration, data auditing, reporting, and testing with the Azure SQL tech stack.
• Full data and analytics project lifecycle experience (including costing and cost management of data solutions) in an Azure PaaS environment is essential.
• Microsoft Azure and Data certifications, at least fundamentals, are a must.
• Experience using agile development methodologies, version control systems and repositories is a must.
• A good, applied understanding of the end-to-end data process development life cycle.
• A good working knowledge of data warehouse methodology using Azure SQL.
• A good working knowledge of the Azure platform, its components, and the ability to leverage its resources to implement solutions is a must.
• Experience working in the public sector, or in an organisation servicing the public sector, is a must.
• Ability to work to demanding deadlines, keep momentum and deal with conflicting priorities in an environment undergoing a programme of transformational change.
• The ability to contribute and adhere to standards, excellent attention to detail, and a strong drive for quality.

Desirables:
• Experience with AWS or Google Cloud platforms will be an added advantage.
• Experience with Azure ML services will be an added advantage.

Personal Attributes:
• Articulate and clear in communications to mixed audiences: in writing, through presentations and one-to-one.
• Ability to present highly technical concepts and ideas in business-friendly language.
• Ability to effectively prioritise and execute tasks in a high-pressure environment.
• Calm and adaptable in the face of ambiguity and in a fast-paced, quick-changing environment.
• Extensive experience working in a team-oriented, collaborative environment as well as working independently.
• Comfortable with the multi-project, multi-tasking lifestyle of a consulting Data Architect.
• Excellent interpersonal skills with teams, and the ability to build trust with clients.
• Ability to support and work with cross-functional teams in a dynamic environment.
• A passion for achieving business transformation; the ability to energise and excite those you work with.
• Initiative; the ability to work flexibly in a team, working comfortably without direct supervision.
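For illustration (not part of the listing): a minimal sketch of triggering an Azure Data Factory pipeline run from Python with the azure-mgmt-datafactory SDK, one small corner of the ADF work described above. All resource names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder subscription id and resource names.
client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-demo",
    pipeline_name="ingest_customers",
    parameters={"load_date": "2024-01-01"},
)
print(run.run_id)  # track this id to monitor the pipeline run
```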
Mumbai
5 - 7 yrs
₹20L - ₹25L / yr
Teradata
Vertica
Python
DBA
Redshift
+8 more
  • Key responsibility is to design, develop and maintain efficient data models for the organization, ensuring optimal query performance for the consumption layer.
  • Develop, deploy and maintain a repository of UDXs written in Java/Python.
  • Develop optimal data model designs, analyzing complex distributed data deployments and making recommendations to optimize performance based on data consumption patterns, performance expectations, the queries executed on the tables/databases, etc.
  • Perform periodic database health checks and maintenance.
  • Design collections in a NoSQL database for efficient performance.
  • Document and maintain a data dictionary from various sources to enable data governance.
  • Coordinate with business teams, IT, and other stakeholders to provide best-in-class data pipeline solutions: exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
  • Implement the Data Governance process and ensure data security.

Requirements

  • Extensive working experience in designing and implementing data models in OLAP data warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc.).
  • Programming experience using Python/Java.
  • Working knowledge of developing and deploying user-defined functions (UDXs) using Java/Python (a Snowpark sketch follows this list).
  • Strong understanding of, and extensive working experience in, OLAP data warehouse (Redshift, Synapse, Snowflake, Teradata, Vertica, etc.) architecture and cloud-native data lake (S3, ADLS, BigQuery, etc.) architecture.
  • Strong knowledge of the design, development and performance tuning of 3NF/Flat/Hybrid data models.
  • Extensive technical experience in SQL, including code optimization techniques.
  • Strong knowledge of database performance tuning and troubleshooting.
  • Knowledge of collection design in any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.), along with implementation of best practices.
  • Ability to understand business functionality, processes, and flows.
  • Good combination of technical and interpersonal skills, with strong written and verbal communication; detail-oriented, with the ability to work independently.
  • Any OLAP DWH DBA experience and user management will be an added advantage.
  • Knowledge of financial-industry-specific data models such as FSLDM, IBM Financial Data Model, etc. will be an added advantage.
  • Experience in Snowflake will be an added advantage.
  • Working experience in BFSI/NBFC and a data understanding of loan/mortgage data will be an added advantage.
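For illustration (not part of the listing): a sketch of registering a Python UDF in Snowflake via Snowpark. Snowflake is just one of the warehouses named above; other platforms have their own UDX mechanisms. Connection parameters and names are placeholders.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.types import StringType

# Placeholder connection parameters.
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

def mask_pan(pan: str) -> str:
    # Keep only the last four characters visible.
    if pan and len(pan) > 4:
        return "*" * (len(pan) - 4) + pan[-4:]
    return pan

# Register the Python function as a reusable warehouse UDF.
session.udf.register(
    func=mask_pan,
    return_type=StringType(),
    input_types=[StringType()],
    name="mask_pan",
    replace=True,
)
```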

Functional knowledge

  • Data Governance & Quality Assurance
  • Modern OLAP Database Architecture & Design
  • Linux
  • Data structures, algorithms & data modeling techniques
  • NoSQL database architecture
  • Data Security
Symansys Technologies India Pvt Ltd
Posted by Tanu Chauhan
Pune, Mumbai
2 - 8 yrs
₹5L - ₹15L / yr
Data Science
Machine Learning (ML)
Python
Tableau
R Programming
+1 more

Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision tree, random forest, SAS, clustering, classification

Senior Analytics Consultant: Responsibilities

  • Understand the business problem and requirements by building domain knowledge, and translate it into a data science problem.
  • Conceptualize and design a cutting-edge data science solution to the problem, applying design-thinking concepts.
  • Identify the right algorithms, tech stack and sample outputs required to efficiently address the end need (a modelling sketch follows this list).
  • Prototype and experiment with the solution to successfully demonstrate its value.
  • Independently, or with support from the team, execute the conceptualized solution as per plan, following project management guidelines.
  • Present the results to internal and client stakeholders in an easy-to-understand manner, with great storytelling, storyboarding, insights and visualization.
  • Help build overall data science capability for eClerx through support in pilots, pre-sales pitches, product development and practice development initiatives.
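For illustration (not part of the listing): a minimal scikit-learn sketch of the random forest modelling named in the specialism line, on synthetic data standing in for a client dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a client dataset -- invented data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a random forest and report holdout accuracy.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```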
CRG Solutions Pvt Ltd
Posted by Sweety Patyal
Mumbai
2 - 4 yrs
₹4L - ₹8L / yr
ETL
SQL Server Integration Services (SSIS)
Microsoft SSIS
SQL
SQL server
  • Should be able to connect with multiple data sources like Excel, CSV, SQL Server, Oracle, etc.
  • Should be able to use the transformation components to transform the data.
  • Should possess knowledge of incremental load, full load, etc. (an incremental-load sketch follows this list).
  • Should design, build and deploy effective packages.
  • Should be able to schedule these packages through task schedulers.
  • Implement stored procedures and effectively query a database.
  • Translate requirements from the business and analysts into technical code.
  • Identify and test for bugs and bottlenecks in the ETL solution.
  • Ensure the best possible performance and quality in the packages.
  • Provide support and fix issues in the packages.
  • Write advanced SQL, including some query tuning.
  • Experience in the identification of data quality issues.
  • Some database design experience is helpful.
  • Experience designing and building complete ETL/SSIS processes moving and transforming data for ODS, Staging, and Data Warehousing.
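For illustration (not part of the listing): a minimal Python sketch of the incremental-load idea mentioned above, upserting staged rows into a target table with a T-SQL MERGE run through pyodbc. The connection string and table names are placeholders.

```python
import pyodbc

# Placeholder connection string.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver;DATABASE=dw;Trusted_Connection=yes"
)

# Upsert: update rows that already exist, insert the new ones.
MERGE_SQL = """
MERGE dbo.customers AS tgt
USING stg.customers AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN
    UPDATE SET tgt.name = src.name, tgt.city = src.city
WHEN NOT MATCHED BY TARGET THEN
    INSERT (customer_id, name, city)
    VALUES (src.customer_id, src.name, src.city);
"""

with pyodbc.connect(CONN_STR) as conn:
    conn.execute(MERGE_SQL)
    conn.commit()
```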
Computer Power Group Pvt Ltd
Bengaluru (Bangalore), Chennai, Pune, Mumbai
7 - 13 yrs
₹14L - ₹20L / yr
R Programming
Python
Data Science
SQL server
Business Analysis
+3 more
Requirement Specifications:

Job Title: Data Scientist
Experience: 7 to 10 years
Work Location: Mumbai, Bengaluru, Chennai
Job Role: Permanent
Notice Period: Immediate to 60 days

Job description:
  • Support delivery of one or more data science use cases, leading on data discovery and model building activities.
  • Conceptualize and quickly build POCs on new product ideas; should be willing to work as an individual contributor.
  • Open to learning and implementing newer tools/products.
  • Experiment with and identify the best methods, techniques and algorithms for analytical problems.
  • Operationalize: work closely with the engineering, infrastructure, service management and business teams to operationalize use cases.

Essential Skills:
  • Minimum 2-7 years of hands-on experience with statistical software tools: SQL, R, Python.
  • 3+ years’ experience in business analytics, forecasting or business planning, with emphasis on analytical modeling, quantitative reasoning and metrics reporting.
  • Experience working with large data sets in order to extract business insights or build predictive models.
  • Proficiency in one or more statistical tools/languages (Python, Scala, R, SPSS or SAS) and related packages like Pandas, SciPy/Scikit-learn, NumPy, etc. (a pandas sketch follows this listing).
  • Good data intuition/analysis skills; SQL and PL/SQL knowledge is a must.
  • Manage and transform a variety of datasets to cleanse, join and aggregate them.
  • Hands-on experience running various methods: regression, random forest, k-NN, k-Means, boosted trees, SVM, neural networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis, statistics (hypothesis testing, descriptive statistics).
  • Deep domain knowledge (BFSI, Manufacturing, Auto, Airlines, Supply Chain, Retail & CPG).
  • Demonstrated ability to work under time constraints while delivering incremental value.
  • Education: minimum a Masters in Statistics, or a PhD in domains linked to applied statistics, applied physics, Artificial Intelligence, Computer Vision, etc.; BE/BTech/BSc Statistics/BSc Maths.
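For illustration (not part of the listing): a minimal pandas sketch of the cleanse-join-aggregate workflow named in the essential skills, on invented sample frames.

```python
import pandas as pd

# Invented sample frames standing in for source extracts.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [250.0, None, 125.5, 80.0],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "segment": ["retail", "corporate", "retail"],
})

# Cleanse, join, and aggregate.
result = (
    orders.dropna(subset=["amount"])              # cleanse: drop null amounts
          .merge(customers, on="customer_id")     # join on the customer key
          .groupby("segment", as_index=False)     # aggregate per segment
          .agg(total_amount=("amount", "sum"))
)
print(result)
```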