20+ Data Warehouse (DWH) Jobs in Pune | Data Warehouse (DWH) Job openings in Pune
Job Description for QA Engineer:
- 6-10 years of experience in ETL testing, Snowflake, and DWH concepts.
- Strong SQL knowledge & debugging skills are a must (a small reconciliation-test sketch follows this list).
- Experience with Azure and Snowflake testing is a plus.
- Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus.
- Strong data warehousing concepts; experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to the financial domain is considered a plus.
- Testing data readiness (data quality) and addressing code or data issues.
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrate strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
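Since the role centres on ETL testing with SQL against Snowflake, here is a minimal sketch of the kind of source-to-target reconciliation check such testing typically involves. It assumes the snowflake-connector-python package; the account, credentials and table names are illustrative placeholders, not details from this posting.

# Illustrative source-vs-target reconciliation check for ETL testing.
# Assumes snowflake-connector-python; credentials and table names are placeholders.
import snowflake.connector

def row_counts_match(conn, source_table: str, target_table: str) -> bool:
    """Compare row counts between a staging (source) table and the loaded target table."""
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT COUNT(*) FROM {source_table}")
        source_count = cur.fetchone()[0]
        cur.execute(f"SELECT COUNT(*) FROM {target_table}")
        target_count = cur.fetchone()[0]
    finally:
        cur.close()
    print(f"{source_table}: {source_count} rows, {target_table}: {target_count} rows")
    return source_count == target_count

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="my_account",   # placeholder
        user="qa_user",         # placeholder
        password="***",         # placeholder
        warehouse="QA_WH",
        database="EDW",
        schema="STAGING",
    )
    try:
        assert row_counts_match(conn, "STAGING.ORDERS_RAW", "DWH.FACT_ORDERS"), "Row counts differ"
    finally:
        conn.close()

In practice the same pattern extends to checksum and column-level comparisons; the row-count check is just the simplest data-readiness test.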
Key Attributes include:
- Team player with a professional and positive approach
- Creative, innovative and able to think outside of the box
- Strong attention to detail during root cause analysis and defect issue resolution
- Self-motivated & self-sufficient
- Effective communicator both written and verbal
- Brings a high level of energy with enthusiasm to generate excitement and motivate the team
- Able to work under pressure with tight deadlines and/or multiple projects
- Experience in negotiation and conflict resolution
at AxionConnect Infosolutions Pvt Ltd
Job Location: Hyderabad/Bangalore/Chennai/Pune/Nagpur
Notice period: Immediate - 15 days
1. Python Developer with Snowflake
Job Description :
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on experience with SQL and the ability to write complex queries.
- Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file (a minimal connection-and-load sketch follows this list).
- Development of data analysis and data processing engines using Python.
- Good experience in data transformation using Python.
- Experience in Snowflake data loads using Python.
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning will be an added advantage.
- Good understanding of data warehouse (DWH) concepts.
- Interpret/analyze business requirements & functional specifications.
- Good to have: dbt, Fivetran, and AWS knowledge.
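As a rough illustration of the connect-to-Snowflake-and-load-files requirement above, the sketch below stages a local CSV and copies it into a table using the snowflake-connector-python package. The stage, table, file path and credentials are placeholder assumptions, not details from the posting.

# Minimal sketch: connect to Snowflake from Python and load a CSV file.
# Assumes snowflake-connector-python; all identifiers/credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",         # placeholder
    warehouse="LOAD_WH",
    database="EDW",
    schema="STAGING",
)
cur = conn.cursor()
try:
    cur.execute("CREATE STAGE IF NOT EXISTS csv_stage")
    # Upload the local file to the internal stage, then copy it into the target table.
    cur.execute("PUT file:///tmp/orders.csv @csv_stage AUTO_COMPRESS=TRUE")
    cur.execute("""
        COPY INTO STAGING.ORDERS_RAW
        FROM @csv_stage/orders.csv.gz
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    print(cur.fetchall())   # COPY INTO returns one status row per loaded file
finally:
    cur.close()
    conn.close()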
Responsibilities:
• Designing Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured & semi-structured data processing
• Implementing Spark-based ETL frameworks (a minimal PySpark sketch follows this list)
• Implementing big data pipelines for data ingestion, storage, processing & consumption
• Modifying the Informatica-Teradata & Unix-based data pipeline
• Enhancing the Talend-Hive/Spark & Unix-based data pipelines
• Developing and deploying Scala/Python-based Spark jobs for ETL processing
• Strong SQL & DWH concepts
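To make the Spark-based ETL responsibility concrete, here is a minimal PySpark sketch that reads semi-structured JSON, applies a simple transformation, and writes a partitioned Hive table. The paths, column names and table name are illustrative assumptions only.

# Minimal PySpark ETL sketch: JSON in, cleaned and partitioned Parquet table out.
# Paths, columns and table name are placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")
    .enableHiveSupport()        # register the output as a Hive/HCatalog table
    .getOrCreate()
)

raw = spark.read.json("hdfs:///data/raw/orders/")           # semi-structured input

cleaned = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")                               # partitioned for pruning
    .format("parquet")
    .saveAsTable("dwh.fact_orders")                          # Hive-managed table
)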
Preferred Background:
• Function as an integrator between business needs and technology solutions, helping to create technology solutions that meet clients' business needs
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understanding of the business's EDW systems and creating high-level design and low-level implementation documents
• Understanding of the business's big data lake systems and creating high-level design and low-level implementation documents
• Designing big data pipelines for data ingestion, storage, processing & consumption
• Work with various stakeholders, understand requirements, and build solutions/data pipelines that address the needs at scale
• Bring key workloads to the clients' Snowflake environment using scalable, reusable data ingestion and processing frameworks to transform a variety of datasets
• Apply best practices for Snowflake architecture, ELT, and data models
Skills - 50% of the below:
• A passion for all things data; understanding how to work with it at scale and, more importantly, knowing how to get the most out of it
• Good understanding of native Snowflake capabilities like data ingestion, data sharing, zero-copy cloning, tasks, Snowpipe, etc.
• Expertise in data modeling, with a good understanding of modeling approaches like Star schema and/or Data Vault
• Experience in automating deployments
• Experience writing code in Python, Scala, Java, or PHP
• Experience in ETL/ELT, either via a code-first approach or using low-code tools like AWS Glue, AppFlow, Informatica, Talend, Matillion, Fivetran, etc.
• Experience with one or more AWS services, especially in relation to integration with Snowflake
• Familiarity with data visualization tools like Tableau, Power BI, Domo, or any similar tool
• Experience with data virtualization tools like Trino, Starburst, Denodo, Data Virtuality, Dremio, etc.
• Certified SnowPro Advanced: Data Engineer is a must.
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption
• S/he has in the past built repeatable tools/data models to solve specific business problems
• S/he should have hands-on experience of working on projects (either as a consultant or within a company) that required them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or be part of the team that implemented process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have a perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset and the ability to design and enhance scalable data platforms to address business needs
• Working experience with data engineering tools for one or more cloud platforms - Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership - articles/white papers/interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as the service provider of choice and leveraging the power of technology to improve efficiencies for our clients.
Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured/unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting & dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision-making, insights, anomaly detection and prediction.
What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) that are optimal physical designs for different use cases (structured/semi-structured). Envision & implement the optimal data modelling, physical design, and performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.
Qualifications & Experience relevant for the role
• A bachelor's degree in Computer Science or a related field with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing and micro-batching to make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, and Data-Platform projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all of the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record of keeping existing technical skills current and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems
• Passion for educating, training, designing, and building end-to-end systems.
- 5+ years of experience in a Data Engineering role in a cloud environment
- Must have good experience in Scala/PySpark (preferably on the Databricks environment)
- Extensive experience with Transact-SQL
- Experience in Databricks/Spark
- Strong experience in data warehouse projects
- Expertise in database development projects with ETL processes
- Manage and maintain data engineering pipelines
- Develop batch processing, streaming and integration solutions (a minimal streaming sketch appears after this posting's lists)
- Experienced in building and operationalizing large-scale enterprise data solutions and applications
- Using one or more Azure data and analytics services in combination with custom solutions
- Azure Data Lake, Azure SQL DW (Synapse), and SQL Database products, or equivalent products from other cloud service providers
- In-depth understanding of data management (e.g. permissions, security, and monitoring)
- Cloud repositories, e.g. Azure GitHub, Git
- Experience in an agile environment (prefer Azure DevOps)
Good to have
- Manage source data access security
- Automate Azure Data Factory pipelines
- Continuous Integration / Continuous Deployment (CI/CD) pipelines, source repositories
- Experience in implementing and maintaining CI/CD pipelines
- Power BI understanding, Delta Lakehouse architecture
- Knowledge of software development best practices
- Excellent analytical and organization skills
- Effective working in a team as well as working independently
- Strong written and verbal communication skills
- Expertise in database development projects and ETL processes
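For the streaming item referenced above, here is a minimal PySpark Structured Streaming sketch that ingests JSON events into a Delta table. It assumes a Databricks-style environment where the delta format is available; the mount paths, schema and checkpoint location are placeholder assumptions.

# Minimal sketch: streaming ingestion of JSON events into a Delta table.
# Assumes a Databricks-style environment (delta format available);
# paths, schema and table layout are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
    .schema(event_schema)                      # streaming file sources need an explicit schema
    .json("/mnt/raw/events/")
)

query = (
    events.withColumn("ingest_date", F.to_date("event_ts"))
    .writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .partitionBy("ingest_date")
    .start("/mnt/delta/events")
)

query.awaitTermination()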
Job Title: Product Manager
Job Description
Bachelor's or master's degree in Computer Science, or equivalent experience.
Worked as Product Owner before and took responsibility for a product or project delivery.
Well-versed with data warehouse modernization to Big Data and Cloud environments.
Good knowledge* of any of the Cloud (AWS/Azure/GCP) – Must Have
Practical experience with continuous integration and continuous delivery workflows.
Self-motivated with strong organizational/prioritization skills and ability to multi-task with close attention to detail.
Good communication skills
Experience in working within a distributed agile team
Experience in handling migration projects – Good to Have
*Data Ingestion, Processing, and Orchestration knowledge
Roles & Responsibilities
Responsible for coming up with innovative and novel ideas for the product.
Define product releases, features, and roadmap.
Collaborate with product teams on defining product objectives, including creating a product roadmap, delivery, market research, customer feedback, and stakeholder inputs.
Work with the Engineering teams to communicate release goals and be a part of the product lifecycle. Work closely with the UX and UI team to create the best user experience for the end customer.
Work with the Marketing team to define GTM activities.
Interface with Sales & Customer teams to identify customer needs and product gaps
Market and competition analysis activities.
Participate in the Agile ceremonies with the team, define epics, user stories, acceptance criteria
Ensure product usability from the end-user perspective
Mandatory Skills
Product Management, DWH, Big Data
4-6 years of total experience in data warehousing and business intelligence
3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)
2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)
Strong experience building visually appealing UI/UX in Power BI
Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)
Experience building Power BI using large data in direct query mode
Expert SQL background (query building, stored procedure, optimizing performance)
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, strong stakeholder management, and presentation skills.
Responsibilities:
- Take the lead in building tools to increase the productivity of our business and product teams
- Build client facing portal to support the submission and integration of games from external developers
- Collaborate with teams in a range of disciplines
- Clearly communicate challenges and progress to stakeholders
- Adopt and learn new technologies
Basic Qualifications:
- 5+ years professional experience in software development and a BS or MS in Computer Science or related field
- Solid understanding of Javascript, NodeJS, PHP, SQL, C#
- Strong knowledge of AWS Cloud architecture, services, and DevOps
- Adhere to software design patterns and have knowledge of algorithms
- Experience with databases and database programming (MySQL, NoSQL, etc.)
- Comfortable understanding and implementing REST APIs, knowledge of AJAX patterns and principles
- In-depth knowledge of modern HTML/CSS
- Strong understanding of web architecture, security, cookies, reverse-proxies
- Have a solid knowledge of web debugging tools (Firebug or Chrome Developer Console)
Pluses:
- Bonus points for data warehouse experience (Snowflake, Redshift)
- Experience in game programming and Unity development
- Knowledge of unit testing and test driven development
- A passion for games (of any type) as well as a passion for code
- Knowledge of mobile gaming metrics and the mobile gaming industry
Perks:
- Free medical, dental, and vision insurance
- Work from home stipend on each paycheck
- Competitive Salary
- Flexible Time Off - work hard and take time when you need it
- Interested? Send us your resume and let's talk!
• Responsible for designing, deploying, and maintaining analytics environment that processes data at scale
• Contribute design, configuration, deployment, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modelling and data warehousing at cloud scale using cloud-native solutions
• Perform development, QA, and dev-ops roles as needed to ensure total end-to-end responsibility for solutions
COMPETENCIES
• Experience building, maintaining, and improving data models / processing pipelines / routing in large-scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large dataset platforms (e.g. Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks); a small Athena query sketch follows this list
• Fluency in multiple programming languages, such as Python, shell scripting, SQL, Java, or similar languages and tools appropriate for large-scale data processing
• Experience with acquiring data from varied sources such as APIs, data queues, flat files, and remote databases
• Understanding of traditional data warehouse components (e.g. ETL, Business Intelligence tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
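As one concrete flavour of querying large datasets on AWS referenced in the list above, below is a small boto3 sketch that submits an Amazon Athena query and polls for the result. The region, database, table and output bucket are placeholder assumptions.

# Minimal sketch: run an Amazon Athena query from Python with boto3 and wait for it.
# Region, database, table and output bucket are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")   # placeholder region

start = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) AS orders FROM fact_orders GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics_db"},                  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder bucket
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes (Athena is asynchronous).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print(f"Query ended in state {state}")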
A global business process management company
Designation – Deputy Manager - TS
Job Description
- Total of 8/9 years of development experience in Data Engineering. B1/BII role.
- Minimum of 4/5 years in AWS data integrations, and should have very good data modelling skills.
- Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration (both data at rest and data in motion) skills but also complete DevOps knowledge.
- Should have experience in delivering at least 4 Data Warehouse or Data Lake solutions on AWS.
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc. (a minimal Glue job skeleton follows this list).
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning and cost modelling. AWS certifications will be an added advantage.
- Should be a team player with excellent communication skills, able to manage their work independently with minimal or no supervision.
- A Life Science & Healthcare domain background will be a plus.
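For the Glue item above, this is the standard shape of a PySpark Glue job script, given as a rough sketch; the catalog database, table and S3 output path are placeholder assumptions rather than details from this posting.

# Minimal AWS Glue (PySpark) job skeleton: read from the Glue Data Catalog,
# transform, and write Parquet to S3. Database, table and path are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a table registered in the Glue Data Catalog (placeholder names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Simple transformation on the underlying Spark DataFrame.
orders_df = orders.toDF().filter("order_id IS NOT NULL")

# Sink: partitioned Parquet on S3 (placeholder bucket).
orders_df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://my-curated-bucket/orders/"
)

job.commit()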
Qualifications
BE/BTech/ME/MTech
We at Datametica Solutions Private Limited are looking for an SQL Lead / Architect who has a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description :
Experience: 6+ Years
Work Location: Pune / Hyderabad
Technical Skills :
- Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server Developer
- Knowledge of database performance tuning techniques
- Rich experience in database development
- Experience in designing and implementing business applications using the Oracle Relational Database Management System
- Experience in developing complex database objects like Stored Procedures, Functions, Packages and Triggers using SQL and PL/SQL
Required Candidate Profile :
- Excellent communication, interpersonal, analytical skills and strong ability to drive teams
- Analyzes data requirements and data dictionaries for moderate to complex projects
- Leads data model-related analysis discussions while collaborating with Application Development teams, Business Analysts, and Data Analysts during joint requirements analysis sessions
- Translate business requirements into technical specifications with an emphasis on highly available and scalable global solutions
- Stakeholder management and client engagement skills
- Strong communication skills (written and verbal)
About Us!
A global leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle Data warehouse Assessment & Migration Planning Product
Raven Automated Workload Conversion Product
Pelican Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over these years are the key factors in achieving our success.
Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
- Must have 4 to 7 years of experience in ETL Design and Development using Informatica Components.
- Should have extensive knowledge in Unix shell scripting.
- Understanding of DW principles (fact and dimension tables, dimensional modelling, and data warehousing concepts); a small star-schema sketch follows this list.
- Research, develop, document and modify ETL processes as per data architecture and modeling requirements.
- Ensure appropriate documentation for all new development and modifications of the ETL processes and jobs.
- Should be good in writing complex SQL queries.
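To illustrate the fact/dimension idea referenced above, here is a tiny self-contained sketch using SQLite from the Python standard library; the schema and data are invented purely for illustration and are not part of this posting.

# Tiny self-contained star-schema illustration (fact + dimension) using SQLite.
# Schema and data are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_name TEXT,
        country TEXT
    );
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        sale_date TEXT,
        amount REAL
    );
""")

cur.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                [(1, "Acme Ltd", "IN"), (2, "Globex", "US")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, "2024-01-05", 120.0),
                 (11, 1, "2024-01-06", 80.0),
                 (12, 2, "2024-01-06", 200.0)])

# Typical star-schema query: join the fact table to a dimension and aggregate.
cur.execute("""
    SELECT d.country, SUM(f.amount) AS total_sales
    FROM fact_sales f
    JOIN dim_customer d ON d.customer_key = f.customer_key
    GROUP BY d.country
""")
print(cur.fetchall())   # e.g. [('IN', 200.0), ('US', 200.0)]
conn.close()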
Datametica is looking for talented SQL engineers who would get training & the opportunity to work on Cloud and Big Data Analytics.
Mandatory Skills:
- Strong in SQL development
- Hands-on at least one scripting language - preferably shell scripting
- Development experience in Data warehouse projects
Opportunities:
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
- Would get a chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
- Will play an active role in setting up the Modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing
at DataMetica
Job Description
Experience : 10+ Years
Location : Pune
Job Requirements:
- Minimum of 10+ years of experience with a proven record of increased responsibility
- Hands-on experience in designing, developing and managing Big Data, Cloud, Data Warehousing and Business Intelligence projects
- Experience of managing projects in Big Data, Cloud, Data Warehousing and Business Intelligence using open source or top-of-the-line tools and technologies
- Good knowledge of Dimensional Modeling
- Experience of working with any ETL and BI reporting tools
- Experience of managing medium to large projects, preferably on Big Data
- Proven experience in project planning, estimation, execution and implementation of medium to large projects
- Should be able to communicate effectively in English
- Strong management and leadership skills, with a proven ability to develop and manage client relationships
- Proven problem-solving skills from both technical and managerial perspectives
- Attention to detail and a commitment to excellence and high standards
- Excellent interpersonal and communication skills, both verbal and written
- Position is remote with occasional travel to other offices, client sites, conventions, training locations, etc.
- Bachelor's degree in Computer Science, Business/Economics, or a related field, or demonstrated equivalent/practical knowledge or experience
Job Responsibilities:
- Day-to-day project management, scrum and agile management, including project planning, delivery and execution of Big Data and
- Primary point of contact for the customer for all project engagements, delivery and project escalations
- Design the right architecture and technology stack depending on business requirements, on Cloud / Big Data and BI related technologies, both on-premise and on cloud
- Liaise with key stakeholders to define the Cloud / Big Data solutions roadmap and prioritize the deliverables
- Responsible for end-to-end project delivery of Cloud / Big Data solutions from a project estimation, project planning, resourcing and monitoring perspective
- Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Support & assist the team in resolving issues during testing and when the system is in production
- Involved in the full customer lifecycle with a goal to make customers successful and increase revenue and retention
- Interface with the offshore engineering team to solve customer issues
- Develop programs that meet customer needs with respect to functionality, performance, scalability, reliability, schedule, principles and recognized industry standards
- Requirement analysis and documentation
- Manage day-to-day operational aspects of a project and its scope
- Prepare for engagement reviews and quality assurance procedures
- Visit and/or host clients to strengthen business relationships
Job Description:
- Working knowledge and hands-on experience of Big Data / Hadoop tools and technologies.
- Experience of working in Pig, Hive, Flume, Sqoop, Kafka etc.
- Database development experience with a solid understanding of core database concepts, relational database design, ODS & DWH.
- Expert level knowledge of SQL and scripting preferably UNIX shell scripting, Perl scripting.
- Working knowledge of data integration solutions and well-versed with any ETL tool (Informatica/DataStage/Ab Initio/Pentaho, etc.).
- Strong problem solving and logical reasoning ability.
- Excellent understanding of all aspects of the Software Development Lifecycle.
- Excellent written and verbal communication skills.
- Experience in Java will be an added advantage
- Knowledge of object oriented programming concepts
- Exposure to ISMS policies and procedures.
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
o Automating infrastructure creation
o Providing easy-to-use solutions to the engineering team
• Conduct research on, test, and implement new metrics collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
What you will bring
• The desire to work in a fast-paced environment.
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and Relational Database with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience with maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipeline (GitHub, BitBucket, Artifactory etc)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud Infrastructure architectures and designs and recommendations
o AWS services like S3, CloudFront, Kubernetes, RDS, Data Warehouses to come up with architecture/suggestions for new use cases.
• Test our system integrity, implemented designs, application developments and other processes related to infrastructure, making improvements as needed
Good to have
• Experience with code quality tools, static or dynamic code analysis and compliance and undertaking and resolving issues identified from vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
- Build a team with skills in ETL, reporting, MDM and ad-hoc analytics support
- Build technical solutions using latest open source and cloud based technologies
- Work closely with offshore senior consultant, onshore team and client's business and IT teams to gather project requirements
- Assist overall project execution from India - starting from project planning, team formation, system design and development, testing, UAT and deployment
- Build demos and POCs in support of business development for new and existing clients
- Prepare project documents and PowerPoint presentations for client communication
- Conduct training sessions to train associates and help shape their growth