11+ Ab Initio Jobs in India
We are looking for a Senior Database Developer to provide a senior-level contribution to the design, development and implementation of critical business enterprise applications for marketing systems.
- Play a lead role in developing, deploying and managing our databases (Oracle, MySQL and MongoDB) on public clouds.
- Design and develop PL/SQL code to perform complex ETL processes.
- Develop UNIX and Perl scripts for data auditing and automation.
- Responsible for database builds and change requests.
- Holistically define the overall reference architecture and manage its overall implementation in the production systems.
- Identify architecture gaps that can improve availability, performance and security for both production systems and database systems, and work towards resolving those issues.
- Work closely with Engineering, Architecture, Business and Operations teams to provide necessary and continuous feedback.
- Automate all the manual steps for the database platform.
- Deliver solutions for access management, availability, security, replication and patching.
- Troubleshoot application database performance issues.
- Participate in daily huddles (30 min.) to collaborate with onshore and offshore teams.
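The data-auditing scripts mentioned above could look like the following minimal sketch (Python standing in for the Perl mentioned in the posting; the file layout and audit rules are hypothetical, for illustration only):

```python
import csv
import io

def audit_rows(reader):
    """Check each row for missing fields and a non-negative amount;
    return a list of (line_number, problem) tuples."""
    problems = []
    for lineno, row in enumerate(reader, start=2):  # header is line 1
        if any(v.strip() == "" for v in row.values()):
            problems.append((lineno, "missing field"))
        elif float(row["amount"]) < 0:
            problems.append((lineno, "negative amount"))
    return problems

# Usage: audit an in-memory CSV extract (stands in for a file on disk).
data = "id,amount\n1,10.5\n2,\n3,-4\n"
issues = audit_rows(csv.DictReader(io.StringIO(data)))
for lineno, problem in issues:
    print(f"line {lineno}: {problem}")
```

In practice such a script would be scheduled (e.g., via cron) against the nightly extracts it audits.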
Qualifications:
- 5+ years of experience in database development.
- Bachelor’s degree in Computer Science, Computer Engineering, Math, or similar.
- Experience using ETL tools (Talend or Ab Initio a plus).
- Experience with relational database programming, processing and tuning (Oracle, PL/SQL, MySQL, MS SQL Server, SQL, T-SQL).
- Familiarity with BI tools (Cognos, Tableau, etc.).
- Experience with Cloud technology (AWS, etc.).
- Agile or Waterfall methodology experience preferred.
- Experience with API integration.
- Advanced software development and scripting skills for use in automation and interfacing with databases.
- Knowledge of software development lifecycles and methodologies.
- Knowledge of developing procedures, packages and functions in a DW environment.
- Knowledge of UNIX, Linux and Service Oriented Architecture (SOA).
- Ability to multi-task, to work under pressure, and think analytically.
- Ability to work with minimal supervision and meet deadlines.
- Ability to write technical specifications and documents.
- Ability to communicate effectively with individuals at all levels in the company and with various business contacts outside of the company in an articulate, professional manner.
- Knowledge of CDP, CRM, MDM and Business Intelligence is a plus.
- Flexible work hours.
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
at Delivery Solutions
About UPS:
Moving our world forward by delivering what matters! UPS is a company with a proud past and an even brighter future. Our values define us. Our culture differentiates us. Our strategy drives us. At UPS we are customer first, people led and innovation driven. UPS’s India based Technology Development Centers will bring UPS one step closer to creating a global technology workforce that will help accelerate our digital journey and help us engineer technology solutions that drastically improve our competitive advantage in the field of Logistics.
‘Future You’ grows as a visible and valued Technology professional with UPS, driving us towards an exciting tomorrow. As a global Technology organization we can put serious resources behind your development. If you are solutions orientated, UPS Technology is the place for you. ‘Future You’ delivers ground-breaking solutions to some of the biggest logistics challenges around the globe. You’ll take technology to unimaginable places and really make a difference for UPS and our customers.
Job Summary:
This position participates in the support of batch and real-time data pipelines utilizing various data analytics processing frameworks in support of data science practices for Marketing and Finance business units. This position supports the integration of data from various data sources, as well as performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. This position performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, implementation of systems and applications software. This position participates and contributes to synthesizing disparate data sources to support reusable and reproducible data assets.
RESPONSIBILITIES
• Supervises and supports data engineering projects and builds solutions by leveraging a strong foundational knowledge in software/application development. This is a hands-on role.
• Develops and delivers data engineering documentation.
• Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.
• Recommends analytic reporting products/tools and supports the adoption of emerging technology.
• Performs data engineering maintenance and support.
• Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis.
• Uses ETL tools to pull data from various sources and load the transformed data into a database or business intelligence platform.
• Codes using programming languages used for statistical analysis and modeling, such as Python/Spark.
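The extract-transform-load flow described in the responsibilities above can be sketched in a few lines. This is a minimal illustration using Python's standard library (SQLite stands in for the real warehouse, and the source rows are hypothetical), not the actual UPS stack:

```python
import sqlite3

# Extract: rows as they might arrive from a source system (hypothetical data).
source_rows = [
    {"customer": " Alice ", "spend": "120.50"},
    {"customer": "bob",     "spend": "80.00"},
]

# Transform: normalise names and cast types.
transformed = [
    (row["customer"].strip().title(), float(row["spend"]))
    for row in source_rows
]

# Load: insert into a target table and verify with an aggregate query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?)", transformed)
total = conn.execute("SELECT SUM(amount) FROM spend").fetchone()[0]
print(total)  # 200.5
```

A production pipeline would swap the literals for source connectors and the SQLite target for the warehouse, but the extract/transform/load stages keep the same shape.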
REQUIRED QUALIFICATIONS:
• Literate in the programming languages used for statistical modeling and analysis, data warehousing and Cloud solutions, and building data pipelines.
• Proficient in developing notebooks in Databricks using Python, Spark, and Spark SQL.
• Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure is preferred.
• Proficient in using Azure Data Factory and other Azure features such as Logic Apps.
• Knowledge of Delta Lake, Lakehouse, and Unity Catalog concepts is preferred.
• Strong understanding of cloud-based data lake systems and data warehousing solutions.
• Has used Agile concepts for development, including Kanban and Scrum.
• Strong understanding of the data interconnections between organizations’ operational and business functions.
• Strong understanding of the data life cycle stages - data collection, transformation, analysis, storing the data securely, providing data accessibility
• Strong understanding of the data environment to ensure that it can scale for the following demands: Throughput of data, increasing data pipeline throughput, analyzing large amounts of data, Real-time predictions, insights and customer feedback, data security, data regulations, and compliance.
• Strong knowledge of algorithms and data structures, as well as data filtering and data optimization.
• Strong understanding of analytic reporting technologies and environments (e.g., Power BI, Looker, Qlik, etc.)
• Understanding of distributed systems and the underlying business problem being addressed; guides team members on how their work will assist by performing data analysis and presenting findings to stakeholders.
REQUIRED SKILLS:
3 years of experience with Databricks, Apache Spark, Python, and SQL
PREFERRED SKILLS:
Delta Lake, Unity Catalog, R, Scala, Azure Logic Apps, a cloud services platform (e.g., GCP, Azure, or AWS), and Agile concepts.
Qualifications :
- Minimum 2 years of .NET development experience (ASP.Net 3.5 or greater and C# 4 or greater).
- Good knowledge of MVC, Entity Framework, and Web API/WCF.
- ASP.NET Core knowledge is preferred.
- Creating APIs / Using third-party APIs
- Working knowledge of Angular is preferred.
- Knowledge of Stored Procedures and experience with a relational database (MSSQL 2012 or higher).
- Solid understanding of object-oriented development principles
- Working knowledge of web, HTML, CSS, JavaScript, and the Bootstrap framework
- Strong understanding of object-oriented programming
- Ability to create reusable C# libraries
- Must be able to write clean comments and readable C# code, and have the ability to self-learn.
- Working knowledge of GIT
Qualities required :
Over and above the technical skills, we prefer candidates with:
- Good communication and time management skills.
- A good team player with the ability to contribute on an individual basis.
- We provide the best learning and growth environment for candidates.
Skills:
.NET Core
.NET Framework
ASP.NET Core
ASP.NET MVC
ASP.NET Web API
C#
HTML
Proficiency in Linux.
Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
Must have experience with Python/Scala.
Must have experience with Big Data technologies like Apache Spark.
Must have experience with Apache Airflow.
Experience with data pipeline and ETL tools like AWS Glue.
Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
- Fix issues with plugins for our Python-based ETL pipelines
- Help with automation of standard workflows
- Deliver Python microservices for provisioning and managing cloud infrastructure
- Responsible for any refactoring of code
- Effectively manage the challenges of handling large volumes of data while working to tight deadlines
- Manage expectations with internal stakeholders and context-switch in a fast-paced environment
- Thrive in an environment that uses AWS and Elasticsearch extensively
- Keep abreast of technology and contribute to the engineering strategy
- Champion best development practices and provide mentorship to others
- First and foremost, you are a Python developer, experienced with the Python data stack
- You love and care about data
- Your code is an artistic manifesto reflecting how elegant you are in what you do
- You feel sparks of joy when a new abstraction or pattern arises from your code
- You subscribe to DRY (Don't Repeat Yourself) and KISS (Keep It Short and Simple)
- You are a continuous learner
- You have a natural willingness to automate tasks
- You have critical thinking and an eye for detail
- Excellent ability and experience working to tight deadlines
- Sharp analytical and problem-solving skills
- Strong sense of ownership and accountability for your work and delivery
- Excellent written and oral communication skills
- Mature collaboration and mentoring abilities
- We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to, your personal projects, as well as any contributions to open-source communities)
- Experience delivering complex software, ideally in a FinTech setting
- Experience with CI/CD tools such as Jenkins, CircleCI
- Experience with code versioning (Git / Mercurial / Subversion)
Deep-Rooted.Co is on a mission to get Fresh, Clean, Community (Local farmer) produce from harvest to reach your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.
Founded out of Bangalore by Arvind, Avinash, Guru and Santosh, and with the support of our investors, we have raised $7.5 million to date in Seed, Series A and debt funding from investors including Accel, Omnivore and Mayfield. Our brand, Deep-Rooted.Co, launched in August 2020, was the first of its kind in India's Fruits & Vegetables (F&V) space. It is present in Bangalore and Hyderabad and on a journey of expansion to newer cities, which will be managed seamlessly through a tech platform designed and built to transform the agri-tech sector.
Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.
How is this possible? It's because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it, as it touches everyday life and is fun. This will be a virtual consultation.
We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.
Purpose of the role:
* As a startup, we have data distributed across various sources like Excel, Google Sheets, databases, etc. As we grow, we need swift decision-making based on all the data that exists. You will help us bring together all this data and put it in a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs.
* Pull data in and manage its growth, freshness and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing and Business Heads.
* Understand the business problem, solve it using the technology, and take it to production - no hand-offs; the full path to production is yours.
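The consolidation described above - pulling rows from several spreadsheet-like sources into one model - could be sketched as follows. This is a minimal illustration with the standard library: the CSV literals stand in for the Excel and Google Sheets API calls, and all column names are hypothetical:

```python
import csv
import io

# Two "sources" standing in for a Google Sheet and an Excel export
# (real API calls would replace these literals).
sheet_a = "sku,units\nA1,10\nA2,5\n"
sheet_b = "sku,units\nA1,3\nB7,8\n"

def load(source_text):
    """Parse one source into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(source_text)))

# Merge into one model keyed by SKU, summing units across sources.
model = {}
for row in load(sheet_a) + load(sheet_b):
    model[row["sku"]] = model.get(row["sku"], 0) + int(row["units"])

print(model)  # {'A1': 13, 'A2': 5, 'B7': 8}
```

Keeping each source behind a small `load`-style function makes it easy to add freshness and correctness checks per source later.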
Technical expertise:
* Good knowledge of and experience with programming languages - Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc.
* Experience with the AWS Cloud Platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on the cloud.
Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, Rest API, Extract Transform Load.
- Designing and coding the data warehousing system to desired company specifications
- Conducting preliminary testing of the warehousing environment before data is extracted
- Extracting company data and transferring it into the new warehousing environment
- Testing the new storage system once all the data has been transferred
- Troubleshooting any issues that may arise
- Providing maintenance support
- Consulting with data management teams to get a big-picture idea of the company’s data storage needs
- Presenting the company with warehousing options based on their storage needs
- 1-3 years of experience in Informatica PowerCenter
- Excellent knowledge of Oracle database and PL/SQL, such as stored procedures, functions, user-defined functions, table partitioning, indexes, views, etc.
- Knowledge of SQL Server database
- Hands-on experience in Informatica PowerCenter and database performance tuning and optimization, including complex query optimization techniques; understanding of an ETL control framework
- Experience in UNIX shell/Perl Scripting
- Good communication skills, including the ability to write clearly
- Able to function effectively as a member of a team
- Proactive with respect to personal and technical development
Preferred Education & Experience:
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in and 5+ years of hands-on, demonstrable experience with:
▪ Data Analysis & Data Modeling
▪ Database Design & Implementation
▪ Database Performance Tuning & Optimization
▪ PL/pgSQL & SQL
- 5+ years of hands-on development experience in a relational database (PostgreSQL/SQL Server/Oracle).
- 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views.
- Hands-on, demonstrable working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.
- Hands-on, demonstrable working experience with database read & write performance tuning and optimization.
- Knowledge of and experience working with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts are added values.
- Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.
- Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
- We are looking for an experienced data engineer to join our team.
- The preprocessing involves ETL tasks using PySpark and AWS Glue, staging data in Parquet format on S3, and querying it with Athena.
To succeed in this data engineering position, you should care about well-documented, testable code and data integrity. We have DevOps engineers who can help with AWS permissions.
We would like to build up a consistent data lake with staged, ready-to-use data, and to build up various scripts that will serve as blueprints for various additional data ingestion and transforms.
If you enjoy setting up something which many others will rely on, and have the relevant ETL expertise, we’d like to work with you.
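The staging flow described above - transforming records and landing them in a partitioned layout that Glue and Athena can catalog - can be sketched in plain Python. In this illustration, local directories and CSV files stand in for S3 and Parquet, and the partition layout is a hypothetical example of the Hive-style `key=value` scheme Athena expects:

```python
import csv
import tempfile
from collections import defaultdict
from pathlib import Path

records = [
    {"order_id": "1", "date": "2023-01-05", "total": "9.99"},
    {"order_id": "2", "date": "2023-01-17", "total": "4.50"},
    {"order_id": "3", "date": "2023-02-02", "total": "7.25"},
]

# Group records by a Hive-style partition key (year/month of the order date).
partitions = defaultdict(list)
for rec in records:
    year, month, _ = rec["date"].split("-")
    partitions[f"year={year}/month={month}"].append(rec)

# Write one file per partition under a staging root (stands in for s3://bucket/).
staging = Path(tempfile.mkdtemp())
for key, rows in partitions.items():
    part_dir = staging / key
    part_dir.mkdir(parents=True)
    with open(part_dir / "part-000.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "date", "total"])
        writer.writeheader()
        writer.writerows(rows)

print(sorted(partitions))  # ['year=2023/month=01', 'year=2023/month=02']
```

With PySpark the grouping and writing collapse into a single `df.write.partitionBy("year", "month")` call, but the resulting directory layout is the same idea.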
Responsibilities
- Analyze and organize raw data
- Build data pipelines
- Prepare data for predictive modeling
- Explore ways to enhance data quality and reliability
- Potentially, collaborate with data scientists to support various experiments
Requirements
- Previous experience as a data engineer with the above technologies
Minimum 2 years of work experience on Snowflake and Azure storage.
Minimum 3 years of development experience with ETL tools.
Strong SQL skills in other databases such as Oracle, SQL Server, DB2 and Teradata.
Good to have Hadoop and Spark experience.
Good conceptual knowledge of data warehousing and its various methodologies.
Working knowledge of scripting, such as UNIX shell.
Good presentation and communication skills.
Should be flexible with overlapping working hours.
Should be able to work independently and be proactive.
Good understanding of Agile development cycle.
Responsible for planning, connecting, designing, scheduling, and deploying data warehouse systems. Develops, monitors, and maintains ETL processes, reporting applications, and data warehouse design.
Role and Responsibility
· Plan, create, coordinate, and deploy data warehouses.
· Design the end-user interface.
· Create best practices for data loading and extraction.
· Develop data architecture, data modeling, and ETL mapping solutions within a structured data warehouse environment.
· Develop reporting applications and ensure data warehouse consistency.
· Facilitate requirements gathering using expert listening skills and develop unique, simple solutions to meet the immediate and long-term needs of business customers.
· Supervise design throughout the implementation process.
· Design and build cubes while performing custom scripts.
· Develop and implement ETL routines according to the DWH design and architecture.
· Support the development and validation required through the lifecycle of the DWH and Business Intelligence systems, maintain user connectivity, and provide adequate security for the data warehouse.
· Monitor the DWH and BI systems' performance and integrity; provide corrective and preventative maintenance as required.
· Manage multiple projects at once.
DESIRABLE SKILL SET
· Experience with technologies such as MySQL, MongoDB, and SQL Server 2008, as well as with newer ones like SSIS and stored procedures
· Exceptional experience developing code, testing for quality assurance, administering RDBMSs, and monitoring databases
· High proficiency in dimensional modeling techniques and their applications
· Strong analytical, consultative, and communication skills, as well as the ability to exercise good judgment and work with both technical and business personnel
· Several years of working experience with Tableau, MicroStrategy, Information Builders, and other reporting and analytical tools
· Working knowledge of SAS and R code used in data processing and modeling tasks
· Strong experience with Hadoop, Impala, Pig, Hive, YARN, and other "big data" technologies such as AWS Redshift or Google Big Data
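The dimensional modeling mentioned above usually means a star schema: a central fact table of measurable events joined to small dimension tables. A toy sketch in SQLite (table and column names are hypothetical, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A toy star schema: one fact table referencing two dimension tables.
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales  (date_key INTEGER, product_key INTEGER, amount REAL);

INSERT INTO dim_date    VALUES (1, '2023-01'), (2, '2023-02');
INSERT INTO dim_product VALUES (10, 'widget'), (20, 'gadget');
INSERT INTO fact_sales  VALUES (1, 10, 5.0), (1, 20, 7.5), (2, 10, 2.5);
""")

# A typical cube-style rollup: revenue by month and product.
rollup = conn.execute("""
    SELECT d.month, p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.month, p.name
    ORDER BY d.month, p.name
""").fetchall()
print(rollup)
```

The same schema shape is what cube-building tools such as SSIS load into and what BI front ends query against.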