
Informatica Big Data Management

at UAE Client

Agency job
6 - 10 yrs
₹15L - ₹22L / yr
Remote, Bengaluru (Bangalore), Hyderabad
Skills
Informatica
Big Data
SQL
Hadoop
Apache Spark
Spark

Skills: Informatica with Big Data Management

 

1. Minimum 6 to 8 years of experience in Informatica BDM development.
2. Experience working with Spark/SQL.
3. Develops Informatica mappings/SQL.
4. Should have experience in Hadoop, Spark, etc.
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.
Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

Similar jobs

Startup Focused on simplifying Buying Intent
Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
4 - 9 yrs
₹28L - ₹56L / yr
Big Data
Apache Spark
Spark
Hadoop
ETL
+7 more
• 5+ years of experience in a Data Engineer role.
• Proficiency in Linux.
• Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
• Must have experience with Python/Scala.
• Must have experience with Big Data technologies like Apache Spark.
• Must have experience with Apache Airflow.
• Experience with data pipeline and ETL tools like AWS Glue.
• Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
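The SQL query-authoring requirement above can be illustrated with a small, self-contained sketch. The in-memory sqlite3 database from the Python standard library stands in for the MySQL/Athena stores the listing names, and the table, columns, and values are invented for the example.

```python
import sqlite3

# In-memory database standing in for the relational stores the role mentions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "purchase", 40.0), (1, "purchase", 60.0), (2, "refund", -10.0)],
)

# Query authoring: total spend per user, restricted to purchases.
rows = conn.execute(
    """
    SELECT user_id, SUM(amount) AS total
    FROM events
    WHERE event = 'purchase'
    GROUP BY user_id
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [(1, 100.0)]
```

The same filter/group/aggregate/sort pattern carries over to Athena or Spark SQL; only the connection layer changes.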
Sentienz Solutions Private Limited
Mihika Haridas
Posted by Mihika Haridas
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹13L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

About Us:

Established in early 2016, Sentienz is an IoT, AI, and big data technology company: a dynamic team of highly skilled data scientists and engineers who specialize in building internet-scale platforms and petabyte-scale digital insights solutions. We are experts in developing state-of-the-art machine learning models as well as advanced analytics platforms.

Sentienz is proud of its flagship product, Sentienz Akiro, an AI-powered connectivity platform. Akiro transforms consumer engagement by letting users communicate easily across their devices, offers essential feedback on customer involvement within your app, and enables IoT machine-to-machine (M2M) communication with unmatched real-time access.

If you want to be part of the future of AI-driven connectivity and IoT innovation at Sentienz, then join us today!


Position: Senior Data Engineer


Job Specifications:

  • 5+ years of experience in data engineering or a similar role.
  • Proficiency in programming languages such as Scala, Java, and Python.
  • Experience with big data technologies such as Spark.
  • Strong SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Hands-on experience with data integration tools like Treeno and Airbyte.
  • Experience with scheduling and workflow automation tools such as Dolphin Scheduler.
  • Familiarity with data visualization tools like Superset.
  • Hands-on experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and their data services.
  • Bachelor's or Master's degree in computer science or a related field.
  • Excellent problem-solving skills, attention to detail, and good communication skills.
  • Candidates must be fast learners and adaptable to a fast-paced development environment.


Location: On-site; Bangalore/Bangalore Urban

Availability to Join: Immediate Joiners

Astegic
at Astegic
3 recruiters
Nikita Pasricha
Posted by Nikita Pasricha
Remote only
5 - 7 yrs
₹8L - ₹15L / yr
Data engineering
SQL
Relational Database (RDBMS)
Big Data
Scala
+14 more

WHAT YOU WILL DO:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional/non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop, and AWS 'big data' technologies (EC2, EMR, S3, Athena).
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

REQUIRED SKILLS & QUALIFICATIONS:

  • 5+ years of experience in a Data Engineer role.
  • Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.
  • Experience with AWS cloud services: EC2, EMR, S3, Athena.
  • Experience with Linux.
  • Experience with object-oriented/object function scripting languages: Python, Java, Shell, Scala, etc.

PREFERRED SKILLS & QUALIFICATIONS:

  • Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

Propellor.ai
at Propellor.ai
5 candid answers
1 video
Anila Nair
Posted by Anila Nair
Remote only
2 - 5 yrs
₹5L - ₹15L / yr
SQL
API
Python
Spark

Job Description - Data Engineer

About us
Propellor is aimed at bringing Marketing Analytics and other business workflows to the cloud ecosystem. We work with international clients to make their analytics ambitions come true by deploying the latest tech stack and data science and engineering methods, making their business data insightful and actionable.

 

What is the role?
This team is responsible for building a Data Platform for many different units. The platform will be built on the cloud, so in this role you will organize and orchestrate different data sources and recommend the services that fulfil the goals, based on the type of data.

Qualifications:

• Experience with Python, SQL, and Spark
• Basic knowledge of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• Building and maintaining APIs
• Strong soft skills and communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.
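The API-building qualification can be sketched minimally with nothing but the Python standard library. The `/metrics` route and its payload below are hypothetical, chosen only to keep the example self-contained; a real platform backend would use a proper framework and its actual data model.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves a single hypothetical JSON endpoint for illustration."""

    def do_GET(self):
        if self.path == "/metrics":
            body = json.dumps({"active_users": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example's output quiet
        pass

# Port 0 asks the OS for any free port; the thread serves in the background.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/metrics"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
print(data)  # {'active_users': 42}
server.shutdown()
```

The front end described in the responsibilities would consume an endpoint like this one; the design question in practice is the contract (routes, payload shapes, errors), not the server plumbing.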
Key Responsibilities
• Design and develop the platform based on a microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Design and develop APIs for the front end to consume.
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meet both technical and consumer needs.
• Stay abreast of developments in web applications and programming languages.

What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply even if you do not match all of them; we are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/Polyglot with multiple tech stacks.
• Worked on open-source technologies – NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with Front-end technologies like ReactJS.
• Backend exposure – good knowledge of building API.
• Worked on serverless technologies.
• Efficient in building microservices in combining server & front-end.
• Knowledge of cloud architecture.
• Should have sound working experience with relational and columnar DB.
• Should be innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.

Whom will you work with?
You will closely work with the engineering team and support the Product Team.

Hiring process includes:

a. Written Test on Python and SQL

b. 2 - 3 rounds of Interviews

Immediate Joiners will be preferred

Fragma Data Systems
at Fragma Data Systems
8 recruiters
Evelyn Charles
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
3 - 9 yrs
₹8L - ₹20L / yr
PySpark
Data engineering
Data Engineer
Windows Azure
ADF
+2 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL.
• Good experience in SQL databases; able to write queries of fair complexity.
• Excellent experience in Big Data programming for data transformations and aggregations.
• Good at ELT architecture: business rules processing and data extraction from the Data Lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills.
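As a rough illustration of the ELT pattern described above (load raw records first, then apply business rules and aggregate for consumption), here is a toy sketch in plain Python. In practice this logic would be written with PySpark DataFrames or Spark SQL; every field name and value below is invented.

```python
# Raw records as they might land in the data lake (extract + load).
raw_lake = [
    {"order_id": 1, "country": "AE", "amount": 120.0, "status": "paid"},
    {"order_id": 2, "country": "IN", "amount": 80.0,  "status": "paid"},
    {"order_id": 3, "country": "AE", "amount": 50.0,  "status": "cancelled"},
]

# Business rule applied after loading: only paid orders feed downstream streams.
paid = [r for r in raw_lake if r["status"] == "paid"]

# Aggregation for business consumption, equivalent to:
#   SELECT country, SUM(amount) FROM orders WHERE status = 'paid' GROUP BY country
totals = {}
for r in paid:
    totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]

print(totals)  # {'AE': 120.0, 'IN': 80.0}
```

In Spark the same flow would be a `filter` followed by `groupBy(...).agg(sum(...))`, executed over partitioned lake data rather than an in-memory list.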
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
  • Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions.
  • Azure Synapse or Azure SQL Data Warehouse.
  • Spark on Azure, as available in HDInsight and Databricks.

Also Good to Have:
  • Experience with Azure Analysis Services.
  • Experience with Power BI.
  • Experience with third-party solutions like Attunity/StreamSets and Informatica.
  • Experience with pre-sales activities (responding to RFPs, executing quick POCs).
  • Capacity planning and performance tuning on the Azure stack and Spark.
Consulting Leader
Agency job
via Buaut Tech by KAUSHANK nalin
Pune, Mumbai
8 - 10 yrs
₹8L - ₹16L / yr
Data integration
talend
Hadoop
Integration
Java
+1 more

 

Job Description:

Role: Data/Integration Architect

Experience – 8-10 Years

Notice Period: Under 30 days

Key Responsibilities: Designing and developing frameworks for batch and real-time jobs on Talend; leading the migration of these jobs from MuleSoft to Talend; maintaining best practices for the team; conducting code reviews and demos.

Core Skillsets:

Talend Data Fabric: application integration, API integration, and data integration. Knowledge of Talend Management Cloud, and of deployment and scheduling of jobs using TMC or AutoSys.

Programming Languages - Python/Java
Databases: SQL Server, Other Databases, Hadoop

Should have worked in an Agile environment

Sound communication skills

Should be open to learning new technologies based on business needs on the job

Additional Skills:

Awareness of other data/integration platforms like MuleSoft and Camel

Awareness of Hadoop, Snowflake, and S3

App-based lending platform (AF1)
Agency job
via Multi Recruit by Ayub Pasha
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹20L / yr
Product Analyst
Data Analytics
SQL
Business Design
+2 more
  • Product Analytics: This is the first and most obvious role of the Product Analyst. In this capacity, the Product Analyst is responsible for the development and delivery of tangible consumer benefits through the product or service of the business.
  • In addition, in this capacity, the Product Analyst is also responsible for measuring and monitoring the product or service’s performance as well as presenting product-related consumer, market, and competitive intelligence.
  • Product Strategy: As a member of the Product team, the Product Analyst is responsible for the development and proposal of product strategies.
  • Product Management Operations: The Product Analyst also has the obligation to respond in a timely manner to all requests and inquiries for product information or changes. They also perform the initial product analysis in order to assess the need for any requested changes as well as their potential impact.
  • In this capacity, the Product Analyst also undertakes financial modeling on the products or services of the business as well as of the target markets in order to bring about an understanding of the relations between the product and the target market. This information is presented to the Marketing Manager and other stakeholders, when necessary.
  • Additionally, the Product Analyst produces reports and makes recommendations to the Product Manager and Product Marketing Manager to be used as guidance in decision-making pertaining to the business’s new as well as existent products.
  • Initiative: In this capacity, the Product Analyst ensures that there is a good flow of communication between the Product team and other teams. The Product Analyst ensures this by actively participating in team meetings and keeping everyone up to date.
  • Pricing and Development: The Product Analyst has the responsibility to monitor the market, competitor activities, as well as any price movements and make recommendations that will be used in key decision making. In this function, the Product Analyst will normally liaise with other departments such as the credit/risk in the business in order to enhance and increase the efficiency of effecting price changes in accordance with market shifts.
  • Customer/Market Intelligence: The Product Analyst has the obligation to drive consumer intelligence through the development of external and internal data sources that improve the business’s understanding of the product’s market, competitor activities, and consumer activities.
  • In the performance of this role, the Product Analyst develops or adopts research tools, sources, and methods that further support and contribute to the business’s product.
A Product development Organisation
Agency job
via Millions Advisory by Vasuki N
Pune
5 - 8 yrs
₹10L - ₹17L / yr
Python
Big Data
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more
  • Must have 5-8 years of experience in handling data.
  • Must have the ability to interpret large amounts of data and to multi-task.
  • Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting, and databases (SQL, etc.).
  • Must have strong analytical and critical thinking to resolve business problems using data and tech.
  • Must have domain familiarity with, and interest in, cloud technologies (Google GCP, Microsoft Azure, Amazon AWS), open-source technologies, and enterprise technologies.
  • Must have the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
  • Must have good communication skills.
  • Working knowledge of/exposure to Elasticsearch, PostgreSQL, Athena, PrestoDB, and Jupyter Notebook.
Artivatic
at Artivatic
1 video
3 recruiters
Layak Singh
Posted by Layak Singh
Bengaluru (Bangalore)
3 - 10 yrs
₹6L - ₹12L / yr
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
TensorFlow
+3 more
Responsibilities:
• Define the short-term tactics and long-term technology strategy.
• Communicate that technical vision to technical and non-technical partners, customers, and investors.
• Lead the development of AI/ML-related products as the group matures into lean, high-performing agile teams.
• Scale the AI/ML teams by finding and hiring the right mix of on-shore and off-shore resources.
• Work collaboratively with the business, partners, and customers to consistently deliver business value.
• Own the vision and execution of developing and integrating AI & machine learning into all aspects of the platform.
• Drive innovation through the use of technology and unique ways of applying it to business problems.

Experience and Qualifications:
• Master's or Ph.D. in AI, computer science, ML, electrical engineering, or related fields (statistics, applied math, computational neuroscience).
• Relevant experience leading and building teams and establishing technical direction.
• A well-developed portfolio of past software development, composed of some mixture of professional work, open-source contributions, and personal projects.
• Experience in leading and developing remote and distributed teams.
• Ability to think strategically and apply that through to innovative solutions.
• Experience with cloud infrastructure.
• Experience working with machine learning, artificial intelligence, and large datasets to drive insights and business value.
• Experience in agent architectures, deep learning, neural networks, computer vision, and NLP.
• Experience with distributed computational frameworks (YARN, Spark, Hadoop).
• Proficiency in Python and C++. Familiarity with DL frameworks (e.g., neon, TensorFlow, Caffe, etc.).

Personal Attributes:
• Excellent communication skills.
• Strong fit with the culture.
• Hands-on approach, self-motivated with a strong work ethic.
• Ability to learn quickly (technology, business models, target industries).
• Creative and inspired.

Superpowers we love:
• Entrepreneurial spirit and a vibrant personality.
• Experience with the lean-startup build-measure-learn cycle.
• Vision for AI.
• Extensive understanding of why things are done the way they are done in agile development.
• A passion for adding business value.

Note: The selected candidate will be offered ESOPs too.
Employment Type: Full Time
Salary: 8-10 Lacs + ESOP
Function: Systems/Product Software
Experience: 3-10 Years
It is India’s biggest vernacular e-sports gaming platform.
Agency job
via Accord Manpower Services by Silky Malik
NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹12L - ₹34L / yr
Machine Learning (ML)
Data Structures
Data engineering
Big Data
Neural networks
• Experience with Big Data, neural networks (deep learning), and reinforcement learning
• Ability to design machine learning systems
• Research and implement appropriate ML algorithms and tools
• Develop machine learning applications according to requirements
• Select appropriate datasets and data representation methods
• Run machine learning tests and experiments
• Perform statistical analysis and fine-tuning using test results
• Extend existing ML libraries and frameworks
• Keep abreast of developments in the field
• Understanding of data structures, data modeling, and software architecture
• Deep knowledge of math, probability, statistics, and algorithms
• Ability to write robust code in Python, Java, and R
• Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
Why apply to jobs via Cutshort
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.
21,01,133 matches delivered
37,12,187 network size
15,000 companies hiring
Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.
Get to hear about interesting companies hiring right now