Technical/Core skills
- Minimum 3 years of experience with Informatica Big Data Management (BDM) in a Hadoop environment.
- Knowledge of Informatica PowerExchange (PWX).
- Minimum 3 years of experience with big data querying tools such as Hive and Impala.
- Ability to design and develop complex mappings using Informatica BDM.
- Ability to create and manage Informatica PowerExchange and real-time CDC implementations.
- Strong Unix skills for writing shell scripts and troubleshooting existing scripts.
- Good knowledge of big data platforms and their frameworks.
- Good to have: experience with Cloudera Data Platform (CDP).
- Experience building stream processing systems with Kafka and Spark (see the sketch after this list).
- Excellent SQL knowledge
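The stream-processing point above can be illustrated with a minimal PySpark Structured Streaming sketch. The broker address, topic name and JSON schema are assumptions for the example, and the job needs the spark-sql-kafka connector package available at submit time.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Subscribe to a Kafka topic; broker and topic names are placeholders.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "events")
       .load())

# Assumed message layout: a JSON object with "id" and "payload" fields.
schema = StructType([StructField("id", StringType()),
                     StructField("payload", StringType())])

parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Console sink for inspection; a real job would target HDFS, Hive, etc.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```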
Soft skills:
- Ability to work independently
- Strong analytical and problem-solving skills
- Willingness to learn new technologies
- Regular interaction with vendors, partners and stakeholders
Qualifications & Skills
- Proven track record of delivering data governance solutions to a large enterprise
- Knowledge of and experience with data governance frameworks, including formulating data governance policies, standards and processes
- Experience in program management and managing cross-functional stakeholders from senior leadership to project manager level
- Experience leading a team of data governance business analysts
- Experience with data governance tools such as Informatica Data Quality, Enterprise Data Catalog, Axon and Collibra
- Experience in metadata management, master and reference data management, data quality and data governance
Should be able to design and operate data pipelines.
Build and manage analytics platforms using Elasticsearch, Redshift and MongoDB.
Strong programming fundamentals in data structures and algorithms.
• Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
• Good hands-on experience with Spark (Spark with Java or PySpark)
• Hive
• Strong SQL skills (Spark SQL/HiveQL; see the sketch after this list)
• Application design, software development and automated testing
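As a sketch of the Spark SQL/HiveQL point above: enableHiveSupport() lets spark.sql() run queries against Hive metastore tables. The database, table and column names here are hypothetical.

```python
from pyspark.sql import SparkSession

# Hive support assumes a reachable Hive metastore; otherwise Spark falls back
# to a local, embedded one.
spark = (SparkSession.builder
         .appName("hiveql-sketch")
         .enableHiveSupport()
         .getOrCreate())

# A typical HiveQL-style aggregation; sales.orders is an illustrative table.
spark.sql("""
    SELECT region, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM sales.orders
    WHERE order_date >= '2024-01-01'
    GROUP BY region
    ORDER BY revenue DESC
""").show()
```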
Environment Experience:
• Experience implementing integrated, automated release management using tools/technologies/frameworks such as Maven, Git, code/security review tools, Jenkins, automated testing and JUnit.
• Demonstrated experience with Agile or other rapid application development methods
• Cloud development (AWS/Azure/GCP)
• Unix / Shell scripting
• Web services, open API development and REST concepts (see the sketch after this list)
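To make the REST point concrete, here is a minimal sketch of a JSON-over-HTTP endpoint using Flask; the route and payload are illustrative, not part of any specific product.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# A minimal REST resource: GET returns a JSON document over HTTP.
@app.route("/api/v1/health", methods=["GET"])
def health():
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8080)
```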
Most companies try to meet expectations, dunnhumby exists to defy them. Using big data, deep expertise and AI-driven platforms to decode the 21st century human experience – then redefine it in meaningful and surprising ways that put customers first. Across digital, mobile and retail. For brands like Tesco, Coca-Cola, Procter & Gamble and PepsiCo.
We’re looking for an Applied Data Scientist who expects more from their career. It’s a chance to apply your expertise to distil complex problems into compelling insights using the best of machine learning and human creativity to deliver effective and impactful solutions for clients. Joining our advanced data science team, you’ll investigate, develop, implement and deploy a range of complex applications and components while working alongside super-smart colleagues challenging and rewriting the rules, not just following them.
What we expect from you
- Degree in Statistics, Maths, Physics, Economics or similar field
- Programming skills (Python and SQL are a must have)
- Analytical Techniques and Technology
- Experience with and passion for connecting your work directly to the customer experience, making a real and tangible impact.
- Logical thinking and problem solving
- Strong communication skills
- Statistical modelling and experience applying data science to client problems (see the sketch after this list)
- 2 to 5 years of experience required
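As a small, hypothetical illustration of the statistical modelling expectation: fitting and evaluating a logistic regression with scikit-learn on synthetic data standing in for customer features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for customer features and a binary response.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# AUC on held-out data is a common check before applying a model to client work.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```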
What you can expect from us
We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not.
Plus, thoughtful perks, like early finish Friday and your birthday off.
You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn.
And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Women’s Network, dh Proud, dh Parent’s & Carer’s, dh One and dh Thrive as the living proof. Everyone’s invited.
Our approach to Flexible Working
At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work.
We believe that you will do your best at work if you have a work / life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.
Job Summary
SQL development for our Enterprise Resource Planning (ERP) product offered to SMEs: regular creation, modification and validation, with testing, of stored procedures, views and functions on MS SQL Server.
Responsibilities and Duties
Understanding the ERP software and its use cases.
Regular creation, modification and testing of the following (see the sketch after this list):
- Stored Procedures
- Views
- Functions
- Nested Queries
- Table and Schema Designs
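A minimal sketch of that create-and-test cycle, driven from Python via pyodbc so the validation step can be scripted; the connection string, table and column names are placeholders.

```python
import pyodbc

# Placeholder connection details; adjust for your SQL Server instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=erp_demo;"
    "UID=sa;PWD=your_password;TrustServerCertificate=yes"
)
cursor = conn.cursor()

# Create or update a simple stored procedure (illustrative schema).
cursor.execute("""
CREATE OR ALTER PROCEDURE dbo.GetOpenOrders
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId AND Status = 'OPEN';
END
""")
conn.commit()

# Basic validation: execute the procedure and inspect the result set.
cursor.execute("EXEC dbo.GetOpenOrders @CustomerId = ?", 42)
for row in cursor.fetchall():
    print(row.OrderId, row.OrderDate, row.TotalAmount)

conn.close()
```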
Qualifications and Skills
MS SQL
- Procedural Language
- Datatypes
- Objects
- Databases
- Schema
- Strong Python coding and OOP skills
- Should have worked on big data product architecture
- Should have worked with at least one SQL database (e.g. MySQL, PostgreSQL) and at least one NoSQL database (e.g. Cassandra, Elasticsearch)
- Hands-on experience with Spark APIs: RDD, DataFrame, Dataset
- Experience developing ETL pipelines for data products (see the sketch after this list)
- Working knowledge of performance optimization, optimal resource utilization, parallelism and tuning of Spark jobs
- Working knowledge of file formats: CSV, JSON, XML, Parquet, ORC, Avro
- Good to have: working knowledge of an analytical database such as Druid, MongoDB or Apache Hive
- Experience handling real-time data feeds (working knowledge of Apache Kafka or a similar tool is a plus)
- Python and Scala (optional), Spark/PySpark, parallel programming
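A minimal PySpark ETL sketch touching several of the points above (the DataFrame API, CSV and Parquet formats, and one common tuning knob); paths, column names and partitioning choices are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = (SparkSession.builder
         .appName("etl-sketch")
         # Shuffle parallelism is a frequent first tuning knob for Spark jobs.
         .config("spark.sql.shuffle.partitions", "64")
         .getOrCreate())

# Extract: read raw CSV (path and columns are illustrative).
orders = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/data/raw/orders.csv"))

# Transform: type the date column and drop invalid rows.
cleaned = (orders
           .withColumn("order_date", to_date(col("order_date"), "yyyy-MM-dd"))
           .filter(col("amount") > 0))

# Load: write columnar Parquet, partitioned by date for efficient reads.
(cleaned
 .repartition("order_date")
 .write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/data/curated/orders"))
```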
Strong experience in Scala/Spark
End client: Sapient
Mode of hiring: FTE
Notice period should be less than 30 days.
The programmer should be proficient in Python and able to work fully independently. They should also be skilled at working with databases and able to fetch data from various sources, organise it and identify useful information through efficient code (see the sketch below).
Familiarity with Python
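A minimal, hypothetical sketch of that fetch-organise-summarise loop using requests and pandas; the endpoint and field names are placeholders.

```python
import pandas as pd
import requests

# Placeholder endpoint standing in for any JSON-returning data source.
resp = requests.get("https://api.example.com/v1/records", timeout=30)
resp.raise_for_status()

# Organise the raw records into a DataFrame and surface a simple summary.
df = pd.DataFrame(resp.json())
summary = df.groupby("category")["value"].agg(["count", "mean"])
print(summary)
```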
JD:
Required Skills:
- Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark or Scala.
- Strong practical knowledge of SQL.
- Hands-on experience with Spark/Spark SQL
- Data structures and algorithms
- Hands-on experience as an individual contributor in the design, development, testing and deployment of applications built on big data technologies
- Experience with big data tools such as Hadoop, MapReduce, Spark, etc.
- Experience with NoSQL databases such as HBase
- Experience with Linux environments (shell scripting, AWK, sed)
- Intermediate RDBMS skills: able to write SQL queries with complex relations on a large RDBMS (100+ tables); see the sketch after this list
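To illustrate the complex-relation SQL point, a small self-contained Spark SQL sketch joining two registered views; on a real 100-plus-table RDBMS the same pattern extends across many more joins. The tables and data are made up.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-join-sketch").getOrCreate()

# Register two illustrative tables as temporary views.
spark.createDataFrame(
    [(1, "alice"), (2, "bob")], ["customer_id", "name"]
).createOrReplaceTempView("customers")
spark.createDataFrame(
    [(10, 1, 250.0), (11, 2, 90.0)], ["order_id", "customer_id", "amount"]
).createOrReplaceTempView("orders")

# A joined aggregate with a HAVING filter, typical of multi-table reporting SQL.
spark.sql("""
    SELECT c.name,
           COUNT(o.order_id) AS order_count,
           SUM(o.amount)     AS total_amount
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.name
    HAVING SUM(o.amount) > 100
""").show()
```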