11+ Data Transformation Services Jobs in India
Deep-Rooted.Co is on a mission to get Fresh, Clean, Community (Local farmer) produce from harvest to reach your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.
Founded in Bangalore by Arvind, Avinash, Guru and Santosh, we have raised $7.5 million to date across Seed, Series A and debt funding from investors including Accel, Omnivore and Mayfield. Our brand Deep-Rooted.Co, launched in August 2020, was the first of its kind in India’s fruits & vegetables (F&V) space. It is present in Bangalore and Hyderabad and is expanding to newer cities, all managed seamlessly through a tech platform designed and built to transform the Agri-Tech sector.
Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.
How is this possible? It’s because we work with smart people. We are looking for engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it: it touches everyday life and is fun. This will be a virtual consultation.
We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.
Purpose of the role:
* As a startup, we have data distributed across various sources: Excel, Google Sheets, databases, etc. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together into a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs.
* Pull data in and manage its growth, freshness and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing and Business Heads.
* Understand the business problem, solve it using technology, and take it to production. No hand-offs: the full path to production is yours.
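As a sketch of the Sheets-to-model work described above: the Google Sheets API’s `values.get` returns rows as lists of strings, which must be normalized into typed records before they can feed a data model. The column names and padding behavior below are illustrative assumptions, not the company’s actual schema:

```python
from datetime import date

def rows_to_records(rows):
    """Convert raw sheet rows (header row + string cells, as the
    Google Sheets API returns them) into typed records."""
    header, *body = rows
    records = []
    for cells in body:
        # Pad short rows: the Sheets API omits trailing empty cells.
        cells = cells + [""] * (len(header) - len(cells))
        rec = dict(zip(header, cells))
        # Cast string cells to proper types; empty cells become zero.
        rec["quantity_kg"] = float(rec["quantity_kg"] or 0)
        rec["harvest_date"] = date.fromisoformat(rec["harvest_date"])
        records.append(rec)
    return records

raw = [
    ["sku", "quantity_kg", "harvest_date"],
    ["TOMATO-01", "42.5", "2023-06-01"],
    ["SPINACH-02", "", "2023-06-02"],
]
clean = rows_to_records(raw)
```

In a real pipeline the casting rules would be driven by a schema per sheet, since freshness and correctness checks (the role’s second and third bullets) depend on knowing each column’s expected type.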
Technical expertise:
* Good knowledge of and experience with programming languages: Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems such as Excel and Google Sheets.
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on the cloud.
Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, Rest API, Extract Transform Load.
at Vola Finance
Roles & Responsibilities
Basic Qualifications:
● The position requires a four-year degree from an accredited college or university.
● Three years of data engineering / AWS Architecture and security experience.
Top candidates will also have:
Proven/strong understanding and/or experience in many of the following:
● Experience designing scalable AWS architectures.
● Ability to create modern data pipelines and data processing using AWS PaaS components (Glue, etc.) or open-source tools (Spark, HBase, Hive, etc.).
● Ability to develop SQL structures that support high volumes and scalability using RDBMS such as SQL Server, MySQL, Aurora, etc.
● Ability to model and design modern data structures, SQL/NoSQL databases, data lakes and cloud data warehouses.
● Experience in creating network architectures for secure, scalable solutions.
● Experience with message brokers such as Kinesis, Kafka, RabbitMQ, AWS SQS, AWS SNS, and Apache ActiveMQ. Hands-on experience with AWS serverless architectures such as Glue, Lambda, Redshift, etc.
● Working knowledge of load balancers, AWS Shield, AWS GuardDuty, VPCs, subnets, network gateways, Route 53, etc.
● Knowledge of building disaster-management systems and security-log notification systems.
● Knowledge of building scalable microservice architectures with AWS.
● Ability to create a framework for monthly security checks, and broad knowledge of AWS services.
● Experience deploying software using CI/CD tools such as CircleCI, Jenkins, etc.
● ML/AI model deployment and production maintenance experience is mandatory.
● Experience with API tools such as REST, Swagger, Postman and Assertible.
● Version-management tools such as GitHub, Bitbucket, GitLab.
● Debugging and maintaining software on Linux or Unix platforms.
● Test-driven development.
● Experience building transactional databases.
● Python and PySpark programming experience.
● Must have experience engineering solutions in AWS.
● Working AWS experience; AWS certification is required prior to hiring.
● Experience working in Agile/Kanban frameworks.
● Must demonstrate solid knowledge of computer science fundamentals like data structures & algorithms.
● Passion for technology and an eagerness to contribute to a team-oriented environment.
● Demonstrated leadership on medium to large-scale projects impacting strategic priorities.
● Bachelor’s degree in Computer science or Electrical engineering or related field is required
● Able to contribute to the gathering of functional requirements, developing technical specifications, and test case planning.
● Demonstrating technical expertise, and solving challenging programming and design problems.
● 60% hands-on coding with architecture ownership of one or more products.
● Ability to articulate architectural and design options, and educate development teams and business users.
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches.
● Mentor and guide team members.
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive forward results.
Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with big data ecosystems.
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper.
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python.
● Ability to lead and mentor technical team members.
● Expertise with the entire Software Development Life Cycle (SDLC).
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences.
● Expertise in the software design/architecture process.
● Expertise with unit testing & Test-Driven Development (TDD).
● Business acumen: strategic thinking & strategy development.
● Experience on cloud or AWS is preferable.
● Good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various data engineering requirements.
● Experience with Agile Development, SCRUM, or Extreme Programming methodologies
Role : Sr Data Scientist / Tech Lead – Data Science
Number of positions : 8
Responsibilities
- Lead a team of data scientists, machine learning engineers and big data specialists
- Be the main point of contact for the customers
- Lead data mining and collection procedures
- Ensure data quality and integrity
- Interpret and analyze data problems
- Conceive, plan and prioritize data projects
- Build analytic systems and predictive models
- Test performance of data-driven products
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
Requirements (please read carefully)
- Very strong in statistics fundamentals. Not all data is Big Data. The candidate should be able to derive statistical insights from very few data points if required, using traditional statistical methods.
- MSc Statistics / PhD Statistics.
- Education: no bar, but preferably from a Statistics academic background (e.g. MSc Statistics, MSc Econometrics), given the first point.
- Strong expertise in Python (any other statistical languages/tools like R, SAS, SPSS etc are just optional, but Python is absolutely essential). If the person is very strong in Python, but has almost nil knowledge in the other statistical tools, he/she will still be considered a good candidate for this role.
- Proven experience as a Data Scientist or similar role, for about 7-8 years
- Solid understanding of machine learning and AI concepts, especially with respect to choosing apt candidate algorithms for a use case, and model evaluation.
- Good expertise in writing SQL queries (should not be dependent upon anyone else for pulling in data, joining them, data wrangling etc)
- Knowledge of data management and visualization techniques --- more from a Data Science perspective.
- Should be able to grasp business problems, ask the right questions to better understand the problem breadthwise /depthwise, design apt solutions, and explain that to the business stakeholders.
- Again, the last point above is extremely important --- should be able to identify solutions that can be explained to stakeholders, and furthermore, be able to present them in simple, direct language.
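As a toy illustration of the "statistical insights from very few data points" requirement above: with only five observations, a traditional t-based confidence interval is the apt tool. The sample values are hypothetical, and the critical value 2.776 is the standard 95% two-sided t value for 4 degrees of freedom:

```python
import math
import statistics

def mean_ci_95(sample, t_crit):
    """95% confidence interval for the mean of a small sample,
    using the t-distribution; t_crit is the two-sided critical
    value for df = len(sample) - 1."""
    n = len(sample)
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    return mean - t_crit * sem, mean + t_crit * sem

# Five daily conversion rates (hypothetical); df = 4, so t_crit = 2.776.
sample = [0.12, 0.15, 0.11, 0.14, 0.13]
lo, hi = mean_ci_95(sample, t_crit=2.776)
```

Being able to present the result in plain language ("we are 95% confident the true rate lies between roughly 11% and 15%") is exactly the stakeholder-communication skill the last two points describe.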
http://www.altimetrik.com
https://www.youtube.com/watch?v=3nUs4YxppNE&feature=emb_rel_end
https://www.youtube.com/watch?v=e40r6kJdC8c
We’re hiring a talented Data Engineer and big data enthusiast to work on our platform and help ensure that our data quality is flawless. As a company, we take in millions of new data points every day. You will work with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on time. You will use the latest cloud data warehouse technology to build robust and reliable data pipelines. Duties/Responsibilities include:
Requirements:
Exceptional candidates will have:
Skills: Informatica with Big Data Management
1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
Role : Talend developer
Location : Coimbatore
Experience : 4+Years
Skills : Talend, any DB
Notice period : Immediate to 15 Days
The programmer should be proficient in Python and able to work fully independently. They should also be skilled in working with databases, fetching data from various sources, organising the data, and identifying useful information through efficient code.
Familiarity with Python
Some examples of work:
Intro
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.
What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as click-stream, transactional and many other sources.
- Create and maintain optimal data pipeline architecture and ETL processes.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Develop data pipelines and infrastructure to support real-time decisions.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
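The extract-transform-load cycle in the responsibilities above can be sketched in miniature. This is a minimal illustration, not Rely’s actual stack: the CSV source, column names, and the use of SQLite as a stand-in warehouse are all assumptions for the example:

```python
import csv
import io
import sqlite3

# Extract: raw CSV from a string here; in production this would come
# from S3, an API, or a click-stream dump.
RAW = """user_id,amount,status
u1,120.50,settled
u2,80.00,failed
u1,40.25,settled
"""

def etl(raw_csv, conn):
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Transform: keep only settled transactions, cast amounts to float.
    settled = [(r["user_id"], float(r["amount"]))
               for r in rows if r["status"] == "settled"]
    # Load: write into a warehouse table (SQLite stands in for Redshift etc.).
    conn.execute("CREATE TABLE IF NOT EXISTS txns (user_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO txns VALUES (?, ?)", settled)
    conn.commit()

conn = sqlite3.connect(":memory:")
etl(RAW, conn)
total = conn.execute("SELECT SUM(amount) FROM txns").fetchone()[0]
```

The "analytics tools" bullet is then just SQL over the loaded table, as in the final aggregate query.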
What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses.
- Experience dealing with data at large scale.
- Proficiency in writing and debugging complex SQL queries.
- Experience working with AWS big data tools.
- Ability to lead projects and implement best data practices and technology.
Data Pipelining
- Strong command of building & optimizing data pipelines, architectures and data sets.
- Strong command of relational SQL & NoSQL databases, including Postgres.
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Big Data: strong experience in big data tools & applications
- Tools: Hadoop, Spark, HDFS, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift.
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Message queuing systems: RabbitMQ, etc.
Software Development & Debugging
- Strong experience in object-oriented programming / object-function scripting languages: Python, Java, C++, Scala, etc.
- Strong grasp of data structures & algorithms.
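The stream-processing systems listed above (Storm, Spark Streaming, Flink) all revolve around windowed aggregation over an event stream. A toy pure-Python version of a tumbling-window count, with made-up event data, shows the core idea those frameworks distribute across a cluster:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s):
    """Aggregate (timestamp, key) events into fixed-size tumbling
    windows: the basic operation that stream processors like
    Spark Streaming or Flink perform at scale."""
    counts = defaultdict(int)
    for ts, key in events:
        # Snap each timestamp to the start of its window.
        window_start = ts - (ts % window_s)
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical click-stream events: (timestamp in seconds, event type).
events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
out = tumbling_window_counts(events, window_s=10)
```

Real systems add what this sketch omits: event-time vs processing-time semantics, late-arrival handling, and checkpointed state, which is where hands-on framework experience matters.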
What would be a bonus
- Prior experience working in a fast-growth Startup
- Prior experience in the payments, fraud, lending, advertising companies dealing with large scale data