Skills: Informatica with Big Data Management
1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL (see the sketch below)
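For context, the kind of mapping logic an Informatica BDM developer pushes down to a Spark cluster can be sketched directly in PySpark/Spark SQL. This is only an illustrative sketch, not part of the posting; the paths, table, and column names are hypothetical:

```python
# Hypothetical source-to-target mapping expressed in Spark SQL, similar in
# spirit to what an Informatica BDM mapping executes on Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_mapping_sketch").getOrCreate()

# Source: raw orders (hypothetical path and schema)
spark.read.parquet("/data/raw/orders").createOrReplaceTempView("src_orders")

# Transformation: filter, derive, and aggregate -- the "mapping" logic
target = spark.sql("""
    SELECT customer_id,
           COUNT(*)          AS order_count,
           SUM(order_amount) AS total_amount
    FROM src_orders
    WHERE order_status = 'COMPLETED'
    GROUP BY customer_id
""")

# Target: write the curated table (hypothetical path)
target.write.mode("overwrite").parquet("/data/curated/customer_orders")
```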
Similar jobs
Proficiency in Linux.
Must have SQL knowledge and experience working with relational databases,
query authoring (SQL), as well as familiarity with databases including MySQL,
MongoDB, Cassandra, and Athena.
Must have experience with Python/Scala.
Must have experience with Big Data technologies like Apache Spark.
Must have experience with Apache Airflow (a minimal DAG sketch follows this list).
Experience with data pipeline and ETL tools like AWS Glue.
Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
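As referenced above, here is a minimal Airflow DAG sketch, assuming Airflow 2.4+; the DAG id, schedule, and the extract/load callables are hypothetical placeholders, not part of the posting:

```python
# A minimal Airflow DAG sketch; dag_id and the callables are made up.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull rows from MySQL, land them on S3, etc.
    print("extracting...")


def load(**context):
    # Placeholder: load the extracted data into Redshift.
    print("loading...")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```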
About Us:
Established in early 2016, Sentienz is an IoT, AI, and big data technology company. Our dynamic team of highly skilled data scientists and engineers specializes in building internet-scale platforms and petabyte-scale digital insights solutions, and we are experts in developing state-of-the-art machine learning models as well as advanced analytics platforms.
Sentienz is proud of its flagship product, Sentienz Akiro, an AI-powered connectivity platform. By allowing users to communicate easily across their devices, Akiro revolutionizes consumer engagement, and it offers essential feedback on customer involvement within your app. Akiro also enables IoT M2M communication by providing unmatched real-time access.
If you want to be part of the future of AI-driven connectivity and IoT innovation at Sentienz, then join us today!
Position: Senior Data Engineer
Job Specifications:
- 5+ years of experience in data engineering or a similar role.
- Proficiency in programming languages such as Scala, Java, and Python.
- Experience with big data technologies such as Spark.
- Strong SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra); a minimal Spark JDBC sketch follows this list.
- Hands-on experience with data integration tools like Treeno and Airbyte.
- Experience with scheduling and workflow automation tools such as Dolphin Scheduler.
- Familiarity with data visualization tools like Superset.
- Hands-on experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and their data services.
- Bachelor's/Master's degree in Computer Science or a related field.
- Excellent problem-solving skills, attention to detail, and good communication skills.
- Candidates must be fast learners and adaptable to a fast-paced development environment.
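As referenced in the list above, a minimal sketch of combining Spark with a relational source over JDBC; the JDBC URL, table, and credentials are hypothetical, and the MySQL JDBC driver is assumed to be on the cluster's classpath:

```python
# A minimal sketch of reading a relational source into Spark over JDBC;
# host, database, table, and credentials are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc_ingest_sketch").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db.example.com:3306/shop")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .load()
)

# Persist to columnar storage for downstream analytics
orders.write.mode("overwrite").parquet("/warehouse/orders")
```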
Location: On-site; Bangalore/Bangalore Urban
Availability to Join: Immediate Joiners
WHAT YOU WILL DO:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop, and AWS 'big data' technologies (EC2, EMR, S3, Athena; see the sketch after this list).
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
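As a hedged illustration of the AWS 'big data' stack named above, here is a minimal boto3 Athena query sketch; the region, database, query, and results bucket are hypothetical:

```python
# A sketch of running an Athena query with boto3; database, table,
# and the S3 results bucket are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

run = athena.start_query_execution(
    QueryString="SELECT channel, COUNT(*) AS signups FROM events GROUP BY channel",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll until the query finishes (simplified; production code should bound this loop)
query_id = run["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:3])
```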
REQUIRED SKILLS & QUALIFICATIONS:
- 5+ years of experience in a Data Engineer role.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores (a minimal consumer sketch follows this list).
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.
- Experience with AWS cloud services: EC2, EMR, S3, Athena.
- Experience with Linux.
- Experience with object-oriented/object function scripting languages: Python, Java, Shell, Scala, etc.
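As referenced in the message-queuing item above, a minimal stream-consumption sketch using the kafka-python client (one of several possible choices; `pip install kafka-python`); the topic name and broker address are hypothetical:

```python
# A minimal Kafka consumer sketch; topic and broker are made up.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Placeholder: route the event into the pipeline (enrich, aggregate, store)
    print(event.get("user_id"), event.get("event_type"))
```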
PREFERRED SKILLS & QUALIFICATIONS:
● Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
Job Description - Data Engineer
About us
Propellor is aimed at bringing Marketing Analytics and other Business Workflows to the Cloud ecosystem. We work with international clients to make their analytics ambitions come true by deploying the latest tech stack and data science and engineering methods, making their business data insightful and actionable.
What is the role?
This team is responsible for building a Data Platform for many different units. The platform will be built on the cloud, so in this role the individual will be organizing and orchestrating different data sources and giving recommendations on the services that fulfil goals based on the type of data.
Qualifications:
• Experience with Python, SQL, Spark
• Knowledge/notions of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• API building and maintaining
• Strong soft skills, communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.
Key Responsibilities
• Design and develop platform based on microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Designing and developing APIs for the front end to consume (see the sketch after this list).
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meeting both technical and consumer needs.
• Staying abreast of developments in web applications and programming languages.
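As referenced in the list above, a minimal sketch of a backend API endpoint for the front end to consume; FastAPI is assumed here purely for illustration, and the route and response model are hypothetical:

```python
# A minimal backend API sketch with FastAPI; route and model are made up.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Campaign(BaseModel):
    id: int
    name: str
    clicks: int


@app.get("/campaigns/{campaign_id}", response_model=Campaign)
def get_campaign(campaign_id: int) -> Campaign:
    # Placeholder: in a real service this would query the database
    return Campaign(id=campaign_id, name="demo", clicks=42)
```

Such a service would typically be served with, e.g., `uvicorn main:app --reload` during development.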
What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply even if you do not match all of them; we are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/Polyglot with multiple tech stacks.
• Worked on open-source technologies – NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with Front-end technologies like ReactJS.
• Backend exposure – good knowledge of building API.
• Worked on serverless technologies.
• Efficient in building microservices in combining server & front-end.
• Knowledge of cloud architecture.
• Should have sound working experience with relational and columnar databases.
• Should be innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.
Whom will you work with?
You will closely work with the engineering team and support the Product Team.
Hiring Process includes:
a. Written Test on Python and SQL
b. 2 - 3 rounds of Interviews
Immediate Joiners will be preferred
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available in HDInsight and Databricks (see the sketch after this list)
- Experience with Azure Analysis Services
- Experience in Power BI
- Experience with third-party solutions like Attunity/StreamSets, Informatica
- Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
- Capacity Planning and Performance Tuning on Azure Stack and Spark.
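As referenced above, a hedged sketch of Spark on Azure reading from ADLS Gen2, as one might on Databricks or HDInsight; the storage account, containers, and paths are hypothetical, and cluster authentication to the storage account is assumed to be configured:

```python
# A sketch of reading/writing ADLS Gen2 data from Spark on Azure;
# account, containers, and paths are made up.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls_ingest_sketch").getOrCreate()

events = spark.read.parquet(
    "abfss://landing@exampleaccount.dfs.core.windows.net/events/2024/"
)

# Simple transformation before loading into the warehouse layer
daily = events.groupBy("event_date").count()

daily.write.mode("overwrite").parquet(
    "abfss://curated@exampleaccount.dfs.core.windows.net/events_daily/"
)
```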
Job Description for :
Role: Data/Integration Architect
Experience – 8-10 Years
Notice Period: Under 30 days
Key Responsibilities: Designing and developing frameworks for batch and real-time jobs on Talend; leading the migration of these jobs from Mulesoft to Talend; maintaining best practices for the team; and conducting code reviews and demos.
Core Skillsets:
Talend Data Fabric - Application, API Integration, Data Integration. Knowledge of Talend Management Cloud and of deployment and scheduling of jobs using TMC or Autosys.
Programming Languages - Python/Java
Databases: SQL Server, Other Databases, Hadoop
Should have worked in an Agile environment
Sound communication skills
Should be open to learning new technologies on the job, based on business needs
Additional Skills:
Awareness of other data/integration platforms like Mulesoft, Camel
Awareness of Hadoop, Snowflake, S3
- Product Analytics: This is the first and most obvious role of the Product Analyst. In this capacity, the Product Analyst is responsible for the development and delivery of tangible consumer benefits through the product or service of the business.
- In addition, in this capacity, the Product Analyst is also responsible for measuring and monitoring the product or service’s performance as well as presenting product-related consumer, market, and competitive intelligence.
- Product Strategy: As a member of the Product team, the Product Analyst is responsible for the development and proposal of product strategies.
- Product Management Operations: The Product Analyst also has the obligation to respond in a timely manner to all requests and inquiries for product information or changes. They also perform the initial product analysis in order to assess the need for any requested changes as well as their potential impact.
- In this capacity, the Product Analyst also undertakes financial modeling on the products or services of the business as well as of the target markets in order to bring about an understanding of the relations between the product and the target market. This information is presented to the Marketing Manager and other stakeholders when necessary.
- Additionally, the Product Analyst produces reports and makes recommendations to the Product Manager and Product Marketing Manager to be used as guidance in decision-making pertaining to the business's new as well as existing products.
- Initiative: In this capacity, the Product Analyst ensures that there is a good flow of communication between the Product team and other teams. The Product Analyst ensures this by actively participating in team meetings and keeping everyone up to date.
- Pricing and Development: The Product Analyst has the responsibility to monitor the market, competitor activities, as well as any price movements and make recommendations that will be used in key decision making. In this function, the Product Analyst will normally liaise with other departments such as the credit/risk in the business in order to enhance and increase the efficiency of effecting price changes in accordance with market shifts.
- Customer/Market Intelligence: The Product Analyst has the obligation to drive consumer intelligence through the development of external and internal data sources that improve the business’s understanding of the product’s market, competitor activities, and consumer activities.
- In the performance of this role, the Product Analyst develops or adopts research tools, sources, and methods that further support and contribute to the business’s product.
- Must have 5-8 years of experience in handling data
- Must have the ability to interpret large amounts of data and to multi-task
- Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting, and databases (SQL, etc.)
- Must have strong analytical and critical thinking to resolve business problems using data and tech
- Must have domain familiarity with, and interest in, cloud technologies (Google GCP, Microsoft Azure, Amazon AWS), open-source technologies, and enterprise technologies
- Must have the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
- Must have good communication skills
- Working knowledge/exposure to ElasticSearch, PostgreSQL, Athena, PrestoDB, Jupyter Notebook
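As one illustration of the PostgreSQL exposure listed above, a minimal psycopg2 query sketch (`pip install psycopg2-binary`); the connection details and table are hypothetical:

```python
# A minimal PostgreSQL query sketch with psycopg2; connection details
# and the orders table are made up.
import psycopg2

conn = psycopg2.connect(
    host="db.example.com",
    dbname="analytics",
    user="reader",
    password="***",
)

# The connection context manager wraps the statements in a transaction
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT status, COUNT(*) FROM orders GROUP BY status ORDER BY 2 DESC"
    )
    for status, n in cur.fetchall():
        print(status, n)

conn.close()
```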