50+ Data Warehouse (DWH) Jobs in India
Experience: 12-15 years, with 7 years in Big Data, Cloud, and Analytics.
Key Responsibilities:
- Technical Project Management:
  - Lead the end-to-end technical delivery of multiple projects in Big Data, Cloud, and Analytics; lead teams in technical solutioning, design, and development.
  - Develop detailed project plans, timelines, and budgets, ensuring alignment with client expectations and business goals.
  - Monitor project progress, manage risks, and implement corrective actions as needed to ensure timely, quality delivery.
- Client Engagement and Stakeholder Management:
  - Build and maintain strong client relationships, acting as the primary point of contact for project delivery.
  - Understand client requirements, anticipate challenges, and provide proactive solutions.
  - Coordinate with internal and external stakeholders to ensure seamless project execution.
  - Communicate project status, risks, and issues to senior management and stakeholders in a clear and timely manner.
- Team Leadership:
  - Lead and mentor a team of data engineers, analysts, and project managers.
  - Ensure effective resource allocation and utilization across projects.
  - Foster a culture of collaboration, continuous improvement, and innovation within the team.
- Technical and Delivery Excellence:
  - Leverage data management expertise to guide and lead technical conversations effectively. Identify the technical areas where the team needs support and work to resolve them, either through your own expertise or by engaging internal and external stakeholders to unblock the team.
  - Implement best practices in project management, delivery, and quality assurance.
  - Drive continuous improvement initiatives to enhance delivery efficiency and client satisfaction.
  - Stay updated with the latest trends and advancements in Big Data, Cloud, and Analytics technologies.
Requirements:
- Experience in IT delivery management, particularly in Big Data, Cloud, and Analytics.
- Strong knowledge of project management methodologies and tools (e.g., Agile, Scrum, PMP).
- Excellent leadership, communication, and stakeholder management skills.
- Proven ability to manage large, complex projects with multiple stakeholders.
- Strong critical thinking skills and the ability to make decisions under pressure.
Preferred Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in Big Data, cloud platforms (e.g., GCP, Azure, AWS, Snowflake, Databricks), project management, or similar areas are preferred.
Job Description for QA Engineer:
- 6-10 years of experience in ETL Testing, Snowflake, DWH Concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience with Azure and Snowflake testing is a plus.
- Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus.
- Strong data warehousing concepts and experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to financial domain knowledge is considered a plus.
- Testing data readiness (data quality) and addressing code or data issues (a minimal automated check is sketched after this list).
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrated collaboration across regions (APAC, EMEA, and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution.
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
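Much of the data-readiness and reconciliation testing described above can be automated. Below is a minimal sketch, assuming an SQLAlchemy + pytest setup; the connection URLs and table names (orders, dw.fact_orders) are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of an automated ETL reconciliation check (hypothetical tables/URLs).
from sqlalchemy import create_engine, text

SOURCE_URL = "postgresql://user:pass@source-host/source_db"      # hypothetical source
TARGET_URL = "snowflake://user:pass@account/ANALYTICS/PUBLIC"    # hypothetical warehouse

def row_count(engine_url: str, table: str) -> int:
    """Run a plain COUNT(*) against the given table."""
    engine = create_engine(engine_url)
    with engine.connect() as conn:
        return conn.execute(text(f"SELECT COUNT(*) FROM {table}")).scalar_one()

def test_load_is_complete():
    """Fail if the warehouse table dropped or duplicated rows during the load."""
    assert row_count(SOURCE_URL, "orders") == row_count(TARGET_URL, "dw.fact_orders")
```

A check like this can run after each ETL batch as part of a regression suite, so data-quality defects surface before business users see them.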
Key Attributes include:
- Team player with professional and positive approach
- Creative, innovative and able to think outside of the box
- Strong attention to detail during root cause analysis and defect issue resolution
- Self-motivated & self-sufficient
- Effective communicator both written and verbal
- Brings a high level of energy with enthusiasm to generate excitement and motivate the team
- Able to work under pressure with tight deadlines and/or multiple projects
- Experience in negotiation and conflict resolution
Qualifications:
1. 10+ years of experience, with 3+ years as Database Architect or related role
2. Technical expertise in data schemas, Amazon Redshift, Amazon S3, and Data Lakes
3. Analytical skills in data warehouse design and business intelligence
4. Strong problem-solving and strategic thinking abilities
5. Excellent communication skills
6. Bachelor's degree in Computer Science or related field; Master's degree preferred
Skills Required:
1. Database architecture and design
2. Data warehousing and business intelligence
3. Cloud-based data infrastructure (Amazon Redshift, S3, Data Lakes)
4. Data governance and security
5. Analytical and problem-solving skills
6. Strategic thinking and communication
7. Collaboration and team management
- As a data engineer, you will build systems that collect, manage, and convert raw data into usable information for data scientists and business analysts to interpret. Your ultimate goal is to make data accessible so organizations can optimize their performance.
- Work closely with PMs and business analysts to build and improve data pipelines, and to identify and model business objects.
- Write scripts implementing data transformation, data structures, and metadata to bring structure to partially unstructured data and improve data quality.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL
- Own data pipelines: monitor, test, and validate to ensure meaningful, high-quality data exists in the data warehouse.
- Strong analytical skills with the ability to collect, organise, analyse, and disseminate significant amounts of information with attention to detail and accuracy.
- Create long term and short-term design solutions through collaboration with colleagues
- Proactive to experiment with new tools
- Strong programming skills in Python
- Skillset: Python, SQL, ETL frameworks, PySpark and Snowflake
- Strong communication and interpersonal skills to interact with senior-level management regarding the implementation of changes
- Willingness to learn and eagerness to contribute to projects
- Designing the data warehouse and the most appropriate DB schema for the data product
- Positive attitude and proactive problem-solving mindset
- Experience in building data pipelines and connectors (a minimal Python sketch follows this list)
- Knowledge of AWS cloud services is preferred
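To illustrate the kind of pipeline work listed above, here is a minimal sketch of a Python extract-transform-load step, assuming a pandas + SQLAlchemy stack; the file path, column names, and target table are hypothetical placeholders.

```python
# Minimal sketch of a Python extract-transform-load step (hypothetical names/paths).
import pandas as pd
from sqlalchemy import create_engine

def run_pipeline(csv_path: str, warehouse_url: str) -> None:
    # Extract: read partially structured raw data.
    raw = pd.read_csv(csv_path)

    # Transform: normalise column names, parse dates, drop obviously bad rows.
    raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
    raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")
    clean = raw.dropna(subset=["order_id", "order_date"])

    # Load: append the cleaned frame into a warehouse staging table.
    engine = create_engine(warehouse_url)
    clean.to_sql("stg_orders", engine, if_exists="append", index=False)

run_pipeline("raw/orders.csv", "postgresql://user:pass@host/dwh")  # hypothetical
```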
Daily and monthly responsibilities
- Review and coordinate with business application teams on data delivery requirements.
- Develop estimation and proposed delivery schedules in coordination with development team.
- Develop sourcing and data delivery designs.
- Review data model, metadata and delivery criteria for solution.
- Review and coordinate with team on test criteria and performance of testing.
- Contribute to the design, development and completion of project deliverables.
- Complete in-depth data analysis and contribute to strategic efforts.
- Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.
Basic Qualifications
- Bachelor’s degree.
- 5+ years of data analysis working with business data initiatives.
- Knowledge of Structured Query Language (SQL) and use in data access and analysis.
- Proficient in data management including data analytical capability.
- Excellent verbal and written communication and high attention to detail.
- Experience with Python.
- Presentation skills in demonstrating system design and data analysis solutions.
1. Bridging the gap between IT and the business using data analytics to assess processes, determine requirements and deliver data-driven recommendations and reports to executives and stakeholders.
2. Ability to search, extract, transform and load data from various databases, cleanse and refine data until it is fit-for-purpose
3. Work within various time constraints to meet critical business needs, while measuring and identifying activities performed and ensuring service requirements are met
4. Prioritization of issues to meet deadlines while ensuring high-quality delivery
5. Ability to pull data and to perform ad hoc reporting and analysis as needed
6. Ability to adapt quickly to new and changing technical environments as well as strong analytical and problem-solving abilities
7. Strong interpersonal and presentation skills
SKILLS:
1. Advanced skills in designing reporting interfaces and interactive dashboards in Google Sheets and Excel
2. Experience working with senior decision-makers
3. Strong advanced SQL/MySQL and Python skills with the ability to fetch data from the Data Warehouse as per the stakeholder's requirement
4. Good knowledge of and experience with Excel VBA and advanced Excel
5. Good experience building Tableau analytical dashboards per stakeholders' reporting requirements
6. Strong communication/interpersonal skills
PERSONA:
1. Experience working on ad hoc requirements
2. Ability to adapt to shifting priorities
3. Experience working in the fintech or e-commerce industry is preferable
4. Engineering background with 2+ years of experience as a Business Analyst for finance processes
at AxionConnect Infosolutions Pvt Ltd
Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur
Notice period: Immediate - 15 days
1. Python Developer with Snowflake
Job Description :
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on experience with SQL and the ability to write complex queries.
- Strong understanding of how to connect to Snowflake using Python and handle any type of file (a minimal connection sketch follows this list).
- Development of data analysis and data processing engines using Python.
- Good experience in data transformation using Python.
- Experience in Snowflake data load using Python.
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning will be an added advantage.
- Good understanding of data warehouse (DWH) concepts.
- Interpret/analyze business requirements & functional specifications.
- Good to have DBT, FiveTran, and AWS Knowledge.
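As a hedged illustration of the Snowflake-from-Python requirement above, the sketch below uses the snowflake-connector-python package to stage and bulk-load a local file; the credentials, file path, and table name are hypothetical placeholders.

```python
# Minimal sketch: connect to Snowflake from Python and load a local CSV file.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
# Stage the local file into the table stage, then bulk-load it with COPY INTO.
cur.execute("PUT file://data/orders.csv @%ORDERS_RAW")
cur.execute("COPY INTO ORDERS_RAW FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
cur.execute("SELECT COUNT(*) FROM ORDERS_RAW")
print("rows loaded:", cur.fetchone()[0])

cur.close()
conn.close()
```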
DATA ENGINEER
Overview
They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.
Job Description:
We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.
Responsibilities:
Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.
Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
Optimize and tune the performance of data systems to ensure efficient data processing and analysis.
Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.
Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.
Implement and maintain data governance and security measures to protect sensitive data.
Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.
Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
Qualifications:
Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.
Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.
Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
Solid understanding of data modeling, data warehousing, and ETL principles.
Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).
Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
Excellent communication and collaboration skills to work effectively in a team environment.
Preferred Qualifications:
Advanced knowledge of distributed computing and parallel processing.
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).
Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
Certification in relevant technologies or data engineering disciplines.
It's regarding a permanent opening with Data Semantics
Data Semantics
We are a product-based company and a Microsoft Gold Partner.
Data Semantics is an award-winning Data Science company with a vision to empower every organization to harness the full potential of its data assets. To achieve this, we provide Artificial Intelligence, Big Data and Data Warehousing solutions to enterprises across the globe. Data Semantics was listed as one of the top 20 Analytics companies by Silicon India 2018 and as one of the Top 20 BI companies by CIO Review India 2014. We are headquartered in Bangalore, India, with offices in 6 global locations including the USA, United Kingdom, Canada, United Arab Emirates (Dubai and Abu Dhabi), and Mumbai. Our mission is to enable our people to learn the art of data management and visualization to help our customers make quick and smart decisions.
Our Services include:
Business Intelligence & Visualization
App and Data Modernization
Low Code Application Development
Artificial Intelligence
Internet of Things
Data Warehouse Modernization
Robotic Process Automation
Advanced Analytics
Our Products:
Sirius – World’s most agile conversational AI platform
Serina
Conversational Analytics
Contactless Attendance Management System
Company URL: https://datasemantics.co
JD:
MSBI
SSAS
SSRS
SSIS
Datawarehousing
SQL
at Gipfel & Schnell Consultings Pvt Ltd
Qualifications & Experience:
▪ 2 - 4 years overall experience in ETLs, data pipeline, Data Warehouse development and database design
▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.
▪ Expert in SQL, worked on advanced SQL for at least 2+ years
▪ Good development skills in Java, Python or other languages
▪ Experience with EMR, S3
▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview
▪ Comfortable working in an agile environment
Responsibilities:
• Designing Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured & semi-structured data processing
• Implementing Spark-based ETL processing frameworks (see the sketch after this list)
• Implementing Big Data pipelines for data ingestion, storage, processing & consumption
• Modifying the Informatica-Teradata & Unix-based data pipelines
• Enhancing the Talend-Hive/Spark & Unix-based data pipelines
• Developing and deploying Scala/Python-based Spark jobs for ETL processing
• Strong SQL & DWH concepts.
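A minimal sketch of the Spark-based ETL work described above, assuming a Hive-enabled Spark session; the input path and table names are hypothetical placeholders.

```python
# Minimal sketch of a Spark ETL step writing a partitioned, compressed Hive table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest raw semi-structured data (JSON) and apply light transformations.
raw = spark.read.json("s3://raw-bucket/orders/")          # hypothetical path
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Persist as a partitioned Parquet table with snappy compression.
(
    clean.write
         .mode("overwrite")
         .partitionBy("order_date")
         .format("parquet")
         .option("compression", "snappy")
         .saveAsTable("dwh.fact_orders")                   # hypothetical table
)
```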
Preferred Background:
• Function as integrator between business needs and technology solutions, helping to create technology solutions to meet clients’ business needs
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understanding of the business's EDW system and creating high-level design and low-level implementation documents
• Understanding of the business's Big Data Lake system and creating high-level design and low-level implementation documents
• Designing Big data pipeline for Data Ingestion, Storage, Processing & Consumption
Role: Project Manager
Experience: 8-10 Years
Location: Mumbai
Company Profile:
Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are rapidly expanding across machine learning, Data Engineering and Analytics functions. Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.
One of the top partners of Databricks, Azure, Cloudera (leading analytics player) and Qlik (leader in BI technologies), Exponentia.ai has recently been awarded the ‘Innovation Partner Award’ by Qlik and the "Excellence in Business Process Automation Award" (IMEA) by Automation Anywhere.
Get to know more about us at http://www.exponentia.ai and https://in.linkedin.com/company/exponentiaai
Role Overview:
· The Project Manager will oversee and take responsibility for the successful delivery of a range of projects in Business Intelligence, Data Warehousing, and Analytics/AI-ML.
· The Project Manager is expected to manage the project and lead teams of BI engineers, data engineers, data scientists, and application developers.
Job Responsibilities:
· Effort estimation, creating the project plan, planning milestones and activities, and tracking progress.
· Identify risks and issues. Come up with a mitigation plan.
· Status reporting to both internal and external stakeholders.
· Communicate with all stakeholders.
· Manage end-to-end project lifecycle - requirements gathering, design, development, testing and go-live.
· Manage end-to-end BI or data warehouse projects.
· Must have experience in running Agile-based project development.
Technical skills
· Experience in Business Intelligence, Data Warehousing, or Analytics projects.
· Understand data lake and data warehouse solutions including ETL pipelines.
· Good to have - Knowledge of Azure Blob Storage, Azure Data Factory, and Synapse Analytics.
· Good to have - Knowledge of Qlik Sense or Power BI.
· Good to have - Certification in PMP/PRINCE2/Agile project management.
· Excellent written and verbal communication skills.
Education:
MBA, B.E. or B. Tech. or MCA degree
Designation: Senior - DBA
Experience: 6-9 years
CTC: INR 17-20 LPA
Night Allowance: INR 800/Night
Location: Hyderabad,Hybrid
Notice Period: NA
Shift Timing : 6:30 pm to 3:30 am
Openings: 3
Roles and Responsibilities:
As a Senior Database Administrator, you will be responsible for the physical design, development, administration, and optimization of properly engineered database systems to meet agreed business and technical requirements.
The candidate will work as part of (but not limited to) the onsite/offsite DBA group. Responsibilities include:
- Administration and management of databases in Dev, Stage, and Production environments.
- Performance tuning of database schemas, stored procedures, etc.
- Providing technical input on the setup and configuration of database servers and the SAN disk subsystem on all database servers.
- Troubleshooting and handling all database-related issues and tracking them through to resolution.
- Proactive monitoring of databases from both a performance and capacity-management perspective.
- Performing database maintenance activities such as backup/recovery and rebuilding and reorganizing indexes.
- Ensuring that all database releases are properly assessed and measured from a functionality and performance perspective.
- Ensuring that all databases are up to date with the latest service packs, patches & security fixes.
- Taking ownership and ensuring high-quality, timely delivery of projects on hand.
- Collaborating with application/database developers, quality assurance, and operations/support staff.
- Helping manage large, high-transaction-rate SQL Server production environments.
Eligibility:
- Bachelor's/Master's degree (BE/BTech/MCA/MTech/MS).
- 6-8 years of solid experience in SQL Server 2016/2019 database administration and maintenance on Azure and AWS cloud.
- Experience handling and managing large SQL Server databases in a real-time production environment, with sizes greater than 200 GB.
- Experience in troubleshooting and resolving database integrity issues, performance issues, blocking/deadlocking issues, connectivity issues, data replication issues, etc.
- Experience configuring and troubleshooting SQL Server HA.
- Ability to detect and troubleshoot database-related CPU, memory, I/O, disk space, and other resource contention issues.
- Experience with database maintenance activities such as backup/recovery, capacity monitoring/management, and Azure Backup Services.
- Experience with HA/failover technologies such as clustering, SAN replication, log shipping & mirroring.
- Experience collaborating with development teams on physical database design activities and performance tuning.
- Experience in managing and making software deployments/changes in real-time production environments.
- Ability to work on multiple projects at one time with minimal supervision and ensure high-quality, timely delivery.
- Knowledge of tools like SQL LiteSpeed, SQL Diagnostic Manager, and AppDynamics.
- Strong understanding of Data Warehousing concepts and SQL Server architecture.
- Certified DBA, proficient in T-SQL, and proficient in various storage technologies such as ASM, SAN, NAS, RAID, and multipathing.
- Strong analytical and problem-solving skills; proactive, independent, and a proven ability to work under tight targets and pressure.
- Experience working in a highly regulated environment, such as financial services institutions.
- Expertise in SSIS and SSRS.
Skills:
SSIS
SSRS
• Work with various stakeholders, understand requirements, and build solutions/data pipelines that address the needs at scale
• Bring key workloads to the clients’ Snowflake environment using scalable, reusable data ingestion and processing frameworks to transform a variety of datasets
• Apply best practices for Snowflake architecture, ELT and data models
Skills - 50% of below:
• A passion for all things data; understanding how to work with it at scale and, more importantly, knowing how to get the most out of it
• Good understanding of native Snowflake capabilities like data ingestion, data sharing, zero-copy cloning, tasks, Snowpipe, etc.
• Expertise in data modeling, with a good understanding of modeling approaches like Star schema and/or Data Vault
• Experience in automating deployments
• Experience writing code in Python, Scala, Java or PHP
• Experience in ETL/ELT, either via a code-first approach or using low-code tools like AWS Glue, AppFlow, Informatica, Talend, Matillion, Fivetran, etc.
• Experience in one or more AWS services, especially in relation to integration with Snowflake
• Familiarity with data visualization tools like Tableau, Power BI, Domo or any similar tool
• Experience with data virtualization tools like Trino, Starburst, Denodo, Data Virtuality, Dremio, etc.
• Certified SnowPro Advanced: Data Engineer is a must.
In this role, you will:
As part of a team focused on preserving the customer experience across the organization, this Analytic Consultant will be responsible for:
- Understand business objectives and provide credible challenge to analysis requirements.
- Verify sound analysis practices and data decisions were leveraged throughout planning and data sourcing phases.
- Conduct in-depth research within complex data environments to identify data integrity issues and propose solutions to improve analysis accuracy.
- Applying critical evaluation to challenge assumptions, formulate a defensible hypothesis, and ensure high-quality analysis results.
- Ensure adherence to data management/ data governance regulations and policies.
- Performing and testing highly complex data analytics for customer remediation.
- Designing analysis project flows and documentation that are structured for consistency, easy to understand, and suitable for multiple levels of reviewers, partners, and regulatory agents, demonstrating the research and analysis completed.
- Investigate and ensure data integrity from multiple sources.
- Ensure data recommended and used is the best “source of truth”.
- Apply knowledge of business, customers, and products to synthesize data into a 'story' and align information to compare/contrast with the industry perspective. The data involved is typically very large, structured or unstructured, and from multiple sources.
- Must have a strong attention to detail and be able to meet high quality standards consistently.
- Other duties as assigned by manager.
- Willing to assist on high priority work outside of regular business hours or weekend as needed.
Essential Qualifications:
- Around 5+ years in similar analytics roles
- Bachelor's, M.A./M.Sc., or higher degree in applied mathematics, statistics, engineering, physics, accounting, finance, economics, econometrics, computer sciences, or business/social and behavioral sciences with a quantitative emphasis.
- Preferred programming knowledge: SQL/SAS.
- Knowledge of PVSI, Non-Lending, Student Loans, Small Business and Personal Lines and Loans is a plus.
- Strong experience with data integration, database structures and data warehouses.
- Persuasive written and verbal communication skills.
Desired Qualifications:
- Certifications in Data Science, or BI Reporting tools.
- Ability to prioritize work, meet deadlines, achieve goals and work under pressure in a dynamic and complex environment – Soft Skills.
- Detail oriented, results driven, and has the ability to navigate in a quickly changing and high demand environment while balancing multiple priorities.
- Ability to research and report on a variety of issues using problem solving skills.
- Ability to act with integrity and a high level of professionalism with all levels of team members and management.
- Ability to make timely and independent judgment decisions while working in a fast-paced and results-driven environment.
- Ability to learn the business aspects quickly, multitask and prioritize between projects.
- Exhibits appropriate sense of urgency in managing responsibilities.
- Ability to accurately process high volumes of work within established deadlines.
- Available to flex schedule periodically based on business need.
- Demonstrate strong negotiation, communication & presentation skills.
- Demonstrates a high degree of reliability, integrity and trustworthiness.
- Takes ownership of assignments and helps drive assignments of the team.
- Dedicated, enthusiastic, driven and performance-oriented; possesses a strong work ethic and good team player.
- Be proactive and get engaged in organizational initiatives.
at Softobiz Technologies Private limited
Responsibilities
- Design and implement Azure BI infrastructure, ensure overall quality of delivered solution
- Develop analytical & reporting tools, promote and drive adoption of developed BI solutions
- Actively participate in BI community
- Establish and enforce technical standards and documentation
- Participate in daily scrums
- Record progress daily in assigned Devops items
Ideal Candidates should have
- 5 + years of experience in a similar senior business intelligence development position
- To be successful in the role you will require a high level of expertise across all facets of the Microsoft BI stack and prior experience in designing and developing well-performing data warehouse solutions
- Demonstrated experience using development tools such as Azure SQL database, Azure Data Factory, Azure Data Lake, Azure Synapse, and Azure DevOps.
- Experience with development methodologies including Agile, DevOps, and CICD patterns
- Strong oral and written communication skills in English
- Ability and willingness to learn quickly and continuously
- Bachelor's Degree in computer science
Job Title: Data Warehouse/Redshift Admin
Location: Remote
Job Description
AWS Redshift Cluster Planning
AWS Redshift Cluster Maintenance
AWS Redshift Cluster Security
AWS Redshift Cluster monitoring.
Experience managing day-to-day operations of provisioning, maintaining backups, DR and monitoring of AWS Redshift/RDS clusters (a minimal boto3 monitoring sketch follows this list)
Hands-on experience with Query Tuning in high concurrency environment
Expertise setting up and managing AWS Redshift
AWS certifications Preferred (AWS Certified SysOps Administrator)
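As a hedged illustration of routine Redshift operations work, the sketch below uses boto3 to check cluster health and snapshot coverage; the region and cluster identifier are hypothetical placeholders, and AWS credentials are assumed to be configured.

```python
# Minimal sketch of Redshift cluster health and backup monitoring with boto3.
import boto3

redshift = boto3.client("redshift", region_name="ap-south-1")   # hypothetical region

def cluster_health(cluster_id: str) -> dict:
    """Return basic availability and backup-retention information for one cluster."""
    cluster = redshift.describe_clusters(ClusterIdentifier=cluster_id)["Clusters"][0]
    return {
        "status": cluster["ClusterStatus"],
        "availability_zone": cluster["AvailabilityZone"],
        "snapshot_retention_days": cluster["AutomatedSnapshotRetentionPeriod"],
    }

def recent_snapshots(cluster_id: str) -> list:
    """List snapshots so backup/DR coverage can be verified."""
    resp = redshift.describe_cluster_snapshots(ClusterIdentifier=cluster_id)
    return [(s["SnapshotIdentifier"], s["Status"]) for s in resp["Snapshots"]]

print(cluster_health("analytics-prod"))       # hypothetical cluster identifier
print(recent_snapshots("analytics-prod"))
```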
• S/he possesses a wide exposure to complete lifecycle of data starting from creation to consumption
• S/he has in the past built repeatable tools / data-models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams or be a part of the team that implemented process re-engineering driven by data analytics/insights
• Should have deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems. E.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset.
Ability to design and enhance scalable data platforms to address the business need
• Working experience with data engineering tools for one or more cloud platforms - Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and Clients to create last mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – Articles/White Papers/Interviews
Mandatory Skills Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
A proficient, independent contributor that assists in technical design, development, implementation, and support of data pipelines; beginning to invest in less-experienced engineers.
Responsibilities:
- Design, Create and maintain on premise and cloud based data integration pipelines.
- Assemble large, complex data sets that meet functional/non functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines to enable BI, Analytics and Data Science teams that assist them in building and optimizing their systems
- Assists in the onboarding, training and development of team members.
- Reviews code changes and pull requests for standardization and best practices
- Evolve existing development to be automated, scalable, resilient, self-serve platforms
- Assist the team in the design and requirements gathering for technical and non technical work to drive the direction of projects
Technical & Business Expertise:
- Hands-on integration experience in SSIS/Mulesoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of Data Lake
- Proven intermediate-level experience writing Python or a similar programming language
- Intermediate understanding of Cloud Platforms (GCP)
- Intermediate understanding of Data Warehousing
- Advanced Understanding of Source Control (Github)
- Manages the delivery of large, complex Data Science projects using appropriate frameworks, collaborating with stakeholders to manage scope and risk. Helps the AI/ML Solution Analyst build solutions as per customer needs on our platform, Newgen AI Cloud. Drives profitability and continued success by managing service quality and cost and leading delivery. Proactively supports sales through innovative solutions and delivery excellence.
Work location: Gurugram
Key Responsibilities:
1. Collaborate/contribute to all project phases, with the technical know-how to design, develop solutions, and deploy at the customer end.
2. End-to-end implementations, i.e. gathering requirements, analysing, designing, coding, and deployment to Production.
3. Client-facing role, talking to the client on a regular basis to get requirement clarifications.
4. Lead the team.
Core Tech Skills: Azure, Cloud Computing, Java/Scala, Python, Design Patterns and fair knowledge of Data Science. Fair Knowledge of Data Lake/DWH
Educational Qualification: Engineering graduate, preferably a Computer Science graduate
Technical-Requirements:
- Bachelor's Degree in Computer Science or a related technical field, and solid years of relevant experience.
- A strong grasp of SQL/Presto and at least one scripting (Python, preferable) or programming language.
- Experience with enterprise-class BI tools and their auditing, along with automation using REST APIs.
- Experience with reporting tools - QuickSight (preferred, at least 2 years hands-on).
- Tableau/Looker (either would suffice, with at least 5+ years of hands-on experience).
- 5+ years of experience with and detailed knowledge of data warehouse technical architectures, data modelling, infrastructure components, ETL/ ELT and reporting/analytic tools and environments, data structures and hands-on SQL coding.
- 5+ years of demonstrated quantitative and qualitative business intelligence experience.
- Experience delivering significant business impact through product analysis.
- 4+ years of large IT project delivery for BI-oriented projects using an agile framework.
- 2+ years of working with very large data warehousing environments.
- Experience in designing and delivering cross functional custom reporting solutions.
- Excellent oral and written communication skills including the ability to communicate effectively with both technical and non-technical stakeholders.
- Proven ability to meet tight deadlines, multi-task, and prioritize workload.
- A work ethic based on a strong desire to exceed expectations.
- Strong analytical and challenge process skills.
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last mile management system. While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.
Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high quality and agile decision-making in our supply chain and logistics workflows.
You will define the way we collect and operationalize data (structured / unstructured), and build production pipelines for our machine learning models, and (RT, NRT, Batch) reporting & dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.
What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet the data completeness, correction and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database design across platforms (MPP, MR, Hive/PIG) which are optimal physical designs for different use cases (structured/semi-structured). Envision & implement the optimal data modelling, physical design, performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their successors.
Qualifications & Experience relevant for the role
• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter that flourishes with new challenges and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems
• Passion for educating, training, designing, and building end-to-end systems.
Rapidly growing fintech SaaS firm that propels business growth
What is the role?
We are looking for a Senior Performance Marketing manager (PPC/SEM) who will be responsible for paid advertising for this company across Google Ads, Social ads and other demand-gen channels.
Our ideal candidate has a blend of analytical and creative mindset, passionate about driving metrics while also being very sensitive to brand and user experience, and distinctive at operating in highly collaborative and cross-functional settings. This role partners closely with our Sales, product, design, and broader marketing teams.
Key responsibilities
- Strategise, execute, monitor, and manage campaigns across multiple platforms such as Google Ads (incl. Search, Display & YouTube), Facebook Ads & LinkedIn Ads.
- Oversee growth in performance campaigns to meet the brand's business goals and strategies.
- Should have hands-on experience managing landing pages, keyword plans, ad copies, display ads, etc.
- Should have extremely good analytical skills to figure out the signals in the campaigns and optimize the campaigns using these insights.
- Implement ongoing A/B and user experience testing for ads, quality score, placements, dynamic landing pages and measure their effectiveness.
- Monitor campaign performance & budget pacing on a day-to-day basis.
- Measure the Campaign performance parameters methodically, analyze campaign performance, compile and present detailed reports with proactive insights.
- Be informed on the latest trends, best practices, and standards in online advertising across demand-gen channels.
- Perform Media Mix modeling. Design, develop, and monitor other digital media buying campaigns.
What are we looking for?
- 5-10 years of pure PPC experience, preferably in a SaaS company managing annual budgets of more than 2 mn USD.
- Highly comfortable with Google Ads Editor, Linkedin Ads, Facebook Business Manager & such.
- Strong working knowledge of PPC Automations/Rules/Scripts and best practices with the ability to analyze Campaign metrics on Excel/Data Studio and optimize campaigns with insights.
- Experience working with Ad channel APIs and other data APIs to deep-dive into metrics & make data-informed optimisations.
- [Good to have] Working knowledge of SQL, Data Warehouses (Bigquery), Data connectors/pipelines, blends/joints (for blending multiple data sources) etc.
- In-depth experience with GA4. Clear understanding of Web Analytics.
- Comfortable writing and editing content for ad copies, landing page snippets, descriptions, etc.
- Experience with running Campaigns for US, European, and the global markets.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company
We are
It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, Company offers a suite of three products - Plum, Empuls, and Compass. The company works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, Company is a 300+ strong team with four global offices in Dubai, San Francisco, Dublin, Singapore, New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
- 5+ years of experience in a Data Engineering role on cloud environment
- Must have good experience in Scala/PySpark (preferably in a Databricks environment)
- Extensive experience with Transact-SQL.
- Experience in Databricks/Spark.
- Strong experience in data warehouse projects
- Expertise in database development projects with ETL processes.
- Manage and maintain data engineering pipelines
- Develop batch processing, streaming and integration solutions
- Experienced in building and operationalizing large-scale enterprise data solutions and applications
- Using one or more of Azure data and analytics services in combination with custom solutions
- Azure Data Lake, Azure SQL DW (Synapse), and SQL Database products or equivalent products from other cloud services providers
- In-depth understanding of data management (e.g. permissions, security, and monitoring).
- Cloud repositories, e.g. Azure, GitHub, Git
- Experience in an agile environment (Prefer Azure DevOps).
Good to have
- Manage source data access security
- Automate Azure Data Factory pipelines
- Continuous Integration/Continuous deployment (CICD) pipelines, Source Repositories
- Experience in implementing and maintaining CICD pipelines
- Power BI understanding, Delta Lakehouse architecture
- Knowledge of software development best practices.
- Excellent analytical and organization skills.
- Effective working in a team as well as working independently.
- Strong written and verbal communication skills.
- Expertise in database development projects and ETL processes.
We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.
We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.
Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!
Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.
Key Responsibilities
- Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality
- Work with the Office of the CTO as an active member of our architecture guild
- Writing pipelines to consume the data from multiple sources
- Writing a data transformation layer using DBT to transform millions of records into the data warehouse.
- Implement data warehouse entities with common re-usable data model designs with automation and data quality capabilities
- Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
What To Bring
- 5+ years of software development experience; startup experience is a plus.
- Past experience working with Airflow and DBT is preferred (a minimal Airflow + dbt sketch follows this list).
- 5+ years of experience working in any backend programming language.
- Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)
- Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects
- Experience building and deploying applications on on-premise infrastructure and AWS or Google Cloud cloud-based infrastructure
- Basic understanding of Kubernetes & Docker is a must.
- Experience in data processing (ETL, ELT) and/or cloud-based platforms
- Working proficiency and communication skills in verbal and written English.
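A minimal sketch of how an Airflow DAG might orchestrate an ingestion step followed by a dbt transformation run, assuming Airflow 2.x and the dbt CLI are available; the DAG id, ingestion script, and project directory are hypothetical placeholders.

```python
# Minimal sketch of an ELT DAG: ingest raw data, then run dbt transformations.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="elt_dbt_daily",            # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_sources",
        bash_command="python ingest.py",                          # hypothetical script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",  # hypothetical project dir
    )

    # Ingestion must finish before dbt builds the warehouse models.
    ingest >> transform
```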
Synergetic IT Services India Pvt Ltd
2. Responsible for gathering system requirements working together with application architects and owners
3. Responsible for generating scripts and templates required for the automatic provisioning of resources
4. Discover standard cloud services offerings, install, and execute processes and standards for optimal use of cloud service provider offerings
5. Incident Management on IaaS, PaaS, SaaS.
6. Responsible for debugging technical issues inside a complex stack involving virtualization, containers, microservices, etc.
7. Collaborate with the engineering teams to enable their applications to run on Cloud infrastructure.
8. Experience with OpenStack, Linux, Amazon Web Services, Microsoft Azure, DevOps, NoSQL, etc. will be a plus.
9. Design, implement, configure, and maintain various Azure IaaS, PaaS, SaaS services.
10. Deploy and maintain Azure IaaS Virtual Machines and Azure Application and Networking Services.
11. Optimize Azure billing for cost/performance (VM optimization, reserved instances, etc.)
12. Implement and fully document IT projects.
13. Identify improvements to IT documentation, network architecture, processes/procedures, and tickets.
14. Research products and new technologies to increase efficiency of business and operations
15. Keep all tickets and projects updated and track time in a detailed format
16. Should be able to multi-task and work across a range of projects and issues with various timelines and priorities
Technical:
• Minimum 1 year of experience with Azure and knowledge of Office365 services preferred.
• Formal education in IT preferred
• Experience with Managed Service business model a major plus
• Bachelor’s degree preferred
Job Title: Product Manager
Job Description
Bachelor's or master's degree in Computer Science or equivalent experience.
Worked as Product Owner before and took responsibility for a product or project delivery.
Well-versed with data warehouse modernization to Big Data and Cloud environments.
Good knowledge* of any of the Cloud (AWS/Azure/GCP) – Must Have
Practical experience with continuous integration and continuous delivery workflows.
Self-motivated with strong organizational/prioritization skills and ability to multi-task with close attention to detail.
Good communication skills
Experience in working within a distributed agile team
Experience in handling migration projects – Good to Have
*Data Ingestion, Processing, and Orchestration knowledge
Roles & Responsibilities
Responsible for coming up with innovative and novel ideas for the product.
Define product releases, features, and roadmap.
Collaborate with product teams on defining product objectives, including creating a product roadmap, delivery, market research, customer feedback, and stakeholder inputs.
Work with the Engineering teams to communicate release goals and be a part of the product lifecycle. Work closely with the UX and UI team to create the best user experience for the end customer.
Work with the Marketing team to define GTM activities.
Interface with Sales & Customer teams to identify customer needs and product gaps
Market and competition analysis activities.
Participate in the Agile ceremonies with the team, define epics, user stories, acceptance criteria
Ensure product usability from the end-user perspective
Mandatory Skills
Product Management, DWH, Big Data
A fast-growing SaaS commerce company (permanent WFH & office)
What is the role?
You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SAAS product. You will work closely with the Product Managers and Technical team to define and implement data pipelines for customer-facing and internal reports.
Key Responsibilities
- Understand the business processes and requirements thoroughly and convert them into reports.
- Should be able to suggest the right approach to the users of the reports.
- Developing, maintaining, and managing advanced reporting, analytics, dashboards and other BI solutions.
What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of it. We are open to promising candidates who are passionate about their work and are team players.
- Education - BE/MCA or equivalent
- Good experience in working on the performance side of the reports.
- Expert-level knowledge of querying in any RDBMS, preferably Redshift or Postgres
- Expert-level knowledge of data warehousing concepts
- Advanced-level scripting to create calculated fields, sets, parameters, etc.
- Degree in mathematics, computer science, information systems, or related field.
- 5-7 years of exclusive experience with Tableau and data warehousing.
Whom will you work with?
You will work with a top-notch tech team, working closely with the CTO and product team.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass. Works with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.
* Formulates and recommends standards for achieving maximum performance and efficiency of the DW ecosystem.
* Participates in the Pre-sales activities for solutions of various customer problem-statements/situations.
* Develop business cases and ROI for the customer/clients.
* Interview stakeholders and develop BI roadmap for success given project prioritization.
* Evangelize self-service BI and visual discovery while helping to automate any manual process at the client site.
* Work closely with the Engineering Manager to ensure prioritization of customer deliverables.
* Champion data quality, integrity, and reliability throughout the organization by designing and promoting best practices.
Implementation (20%)
* Help DW/DE team members with issues needing technical expertise or complex systems and/or programming knowledge.
* Provide on-the-job training for new or less experienced team members.
* Develop a technical excellence team.
Requirements
- experience designing business intelligence solutions
- experience with ETL Process, Data warehouse architecture
- experience with Azure Data services i.e., ADF, ADLS Gen 2, Azure SQL DB, Synapse, Azure Databricks, and Power BI
- Good analytical and problem-solving skills
- Fluent in relational database concepts and flat file processing concepts
- Must be knowledgeable in software development lifecycles/methodologies
Job Description:
We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of daily data, built a data warehouse to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them.
Responsibilities:
- Develop, test, and implement data solutions based on functional / non-functional business requirements.
- You would be required to code in Scala and PySpark daily on Cloud as well as on-prem infrastructure
- Build Data Models to store the data in a most optimized manner
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Implementing the ETL process and optimal data pipeline architecture
- Monitoring performance and advising any necessary infrastructure changes.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Proactively identify potential production issues and recommend and implement solutions
- Must be able to write quality code and build secure, highly available systems.
- Create design documents that describe the functionality, capacity, architecture, and process.
- Review peers' code and pipelines before deploying to production, checking for optimization issues and code standards
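The responsibilities above centre on daily Scala/PySpark coding and ETL pipeline implementation. As a minimal, illustrative sketch only (the paths, column names and aggregation below are hypothetical, not this employer's actual pipeline), a daily PySpark batch job might look like this:

```python
# Minimal PySpark batch-ETL sketch: read raw events, apply basic transforms,
# and write a curated, partitioned table. All paths and column names are
# illustrative placeholders only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events_etl").getOrCreate()

# Extract: raw JSON events landed by an upstream ingestion job (hypothetical path)
raw = spark.read.json("s3a://raw-zone/events/dt=2024-01-01/")

# Transform: basic cleansing, typing and a daily aggregation
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_time"))
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet into the curated zone of the warehouse
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://curated-zone/daily_event_counts/"))

spark.stop()
```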
Skill Sets:
- Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Proficient understanding of distributed computing principles
- Experience in working with batch processing/ real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, Apache Airflow.
- Implemented complex projects dealing with considerable data sizes (petabyte scale).
- Optimization techniques (performance, scalability, monitoring, etc.)
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases such as HBase, Cassandra, MongoDB, etc.
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Creation of DAGs (e.g., in Apache Airflow) for data engineering workflows; see the sketch after this list
- Expert in Python/Scala programming, especially for data engineering/ETL purposes
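Since DAG creation and Airflow are called out above, here is a minimal Apache Airflow DAG sketch; the DAG id, task ids and callables are hypothetical and only illustrate a simple extract → transform dependency:

```python
# Minimal Airflow DAG sketch: two dependent tasks on a daily schedule.
# The DAG id, task ids and callables are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract raw data from source systems")


def transform():
    print("transform and load into the warehouse")


with DAG(
    dag_id="example_etl_dag",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task
```

In practice the callables would trigger Spark jobs or warehouse loads rather than print statements.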
Senior Software Engineer - Data Team
We are seeking a highly motivated Senior Software Engineer with hands-on experience building scalable, extensible data solutions, identifying and addressing performance bottlenecks, collaborating with other team members, and implementing best practices for data engineering. Our engineering process is fully agile and has a very fast release cycle, which keeps our environment energetic and fun.
What you'll do:
Design and development of scalable applications.
Work with Product Management teams to get maximum value out of existing data.
Contribute to continual improvement by suggesting improvements to the software system.
Ensure high scalability and performance
You will advocate for good, clean, well documented and performing code; follow standards and best practices.
We'd love for you to have:
Education: Bachelor/Master Degree in Computer Science.
Experience: 3-5 years of relevant experience in BI/DW with hands-on coding experience.
Mandatory Skills
Strong in problem-solving
Strong experience with Big Data technologies: Hive, Hadoop, Impala, HBase, Kafka, Spark
Strong experience with orchestration frameworks like Apache Oozie and Airflow
Strong experience in data engineering
Strong experience with Database and Data Warehousing technologies and ability to understand complex design, system architecture
Experience with the full software development lifecycle, design, develop, review, debug, document, and deliver (especially in a multi-location organization)
Good knowledge of Java
Desired Skills
Experience with Python
Experience with reporting tools like Tableau, QlikView
Experience with Git and CI/CD pipelines
Awareness of cloud platforms, e.g., AWS
Excellent communication skills with team members, Business owners, across teams
Be able to work in a challenging, dynamic environment and meet tight deadlines
4-6 years of total experience in data warehousing and business intelligence
3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)
2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)
Strong experience building visually appealing UI/UX in Power BI
Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)
Experience building Power BI using large data in direct query mode
Expert SQL background (query building, stored procedure, optimizing performance)
A fast-growing SaaS commerce company (permanent WFH & office)
Job Description :
A candidate who has a strong background in the design and implementation of scalable architecture and a good understanding of algorithms, data structures, and design patterns. The candidate must be ready to learn new tools, languages, and technologies.
Skills :
Microservices, MySQL/Postgres, Kafka/Message Queues, Elasticsearch, Data pipelines, AWS Cloud, Clickhouse/Redshift
What you need to succeed in this role
- Minimum 6 years of experience
- Good understanding of various database types: RDBMS, NoSQL, GraphDB, etc
- Ability to build highly stable reliable APIs and backend services.
- Should be familiar with distributed, high availability database systems
- Experience with queuing systems like Kafka (a minimal consumer sketch follows this list)
- Hands-on in cloud infrastructure AWS/GCP/Azure
- A big plus if you know one or more of the following: Confluent ksqlDB, Kafka Connect, Kafka Streams
- Hands-on experience with data warehouse/OLAP systems such as Redshift or ClickHouse is an added plus.
- Good communication and interpersonal skills
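Given the emphasis on Kafka-based queuing above, a minimal consumer sketch using the confluent-kafka Python client might look like the following; the broker address, consumer group and topic name are placeholders:

```python
# Minimal Kafka consumer sketch using the confluent-kafka client.
# Broker address, consumer group and topic name are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "example-consumer-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Process the raw message payload (bytes) here
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```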
Benefits of joining us
- Ability to join a small and growing team, and work with some of the coolest people you've ever met
- Opportunity to make an impact, and leave your mark on this organization.
- Competitive compensation, with the ability to shape your own career trajectory
- Go Extra Mile with Learning and Development
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle multiple concepts while maintaining quality, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass. We work with over 1,000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.
Job Description - Sr Azure Data Engineer
Roles & Responsibilities:
- Hands-on programming in C# / .Net,
- Develop serverless applications using Azure Function Apps.
- Writing complex SQL Queries, Stored procedures, and Views.
- Creating Data processing pipeline(s).
- Develop / Manage large-scale Data Warehousing and Data processing solutions.
- Provide clean, usable data and recommend data efficiency, quality, and data integrity.
Skills
- Should have working experience on C# /.Net.
- Proficient with writing SQL queries, Stored Procedures, and Views
- Should have worked on Azure Cloud Stack.
- Should have working experience in developing serverless code.
- Must have worked on Azure Data Factory (mandatory).
Experience
- 4+ years of relevant experience
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, along with strong stakeholder management and presentation skills.
The role provides L2 and L3 support and IS services to the FP&A community using the Cognos Controller applications.
- Problem determination / troubleshooting and root cause analysis skills for our IBM Cognos Controller Cloud and On-Premises offerings
- Create or enhance knowledge assets for our Knowledge Base (how-to guides, technical notes or similar).
- Contribute to key operational metrics, for example NPS, initial response time, backlog management, and time to resolution
Technical expertise:
- Previous experience with IBM Cognos Controller, Planning Analytics, or equivalent financial consolidation software (e.g., Anaplan, Tagetik, OneStream, Vena Financial Close Management, Oracle Financial Consolidation)
- Web tier technologies
- Network administration
- Web applications
- Databases (SQL Server, Oracle, DB2)
- Strong troubleshooting and communication skills
- Demonstrated organization and time management skills
- Demonstrated verbal and written communication skills
- Must be self-motivated and disciplined
- Ability to recognize and prioritize critical tasks independently
1. Strong expertise in IBM Cognos Controller 10.x
2. Strong SQL knowledge (Oracle, MS SQL Server)
3. Good grasp of data warehousing concepts with the Cognos schema
4. Preferred: knowledge of Cognos finance products (Planning/TM1)
5. Excellent communication skills
Cognos Controller 10.4.2 (on-premises) experience is the key search term for this Cognos Controller role.
A Pre-series A funded FinTech Company
Responsibilities:
- Ensure and own Data integrity across distributed systems.
- Extract, Transform and Load data from multiple systems for reporting into BI platform.
- Create Data Sets and Data models to build intelligence upon.
- Develop and own various integration tools and data points.
- Hands-on development and/or design within the project in order to maintain timelines.
- Work closely with the Project manager to deliver on business requirements OTIF (on time in full)
- Understand the cross-functional business data points thoroughly and be SPOC for all data-related queries.
- Work with both Web Analytics and Backend Data analytics.
- Support the rest of the BI team in generating reports and analysis
- Quickly learn and use Bespoke & third party SaaS reporting tools with little documentation.
- Assist in presenting demos and preparing materials for Leadership.
Requirements:
- Strong experience in Datawarehouse modeling techniques and SQL queries
- A good understanding of designing, developing, deploying, and maintaining Power BI report solutions
- Ability to create KPIs, visualizations, reports, and dashboards based on business requirement
- Knowledge and experience in prototyping, designing, and requirement analysis
- Be able to implement row-level security on data and understand application security layer models in Power BI
- Proficiency in making DAX queries in Power BI desktop.
- Expertise in using advanced level calculations on data sets
- Experience in the Fintech domain and stakeholder management.
Purpose of Job:
Responsible for drawing insights from many sources of data to answer important business
questions and help the organization make better use of data in their daily activities.
Job Responsibilities:
We are looking for a smart and experienced Data Engineer 1 who can work with a senior
manager to
⮚ Build DevOps solutions and CICD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python (a minimal pytest sketch follows this list)
⮚ Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high-quality data architecture and pipelines to support business and reporting needs
⮚ Deliver on data architecture projects and implementation of next-generation BI solutions
⮚ Interface with other teams to extract, transform, and load data from a wide variety of data sources
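To illustrate the unit-testing expectation above, here is a minimal pytest sketch; the helper function and health-check endpoint are hypothetical, not part of this role's actual codebase:

```python
# Minimal pytest sketch for unit-testing a small helper and a stubbed API call.
# The helper function and endpoint are hypothetical, for illustration only.
import pytest
import requests


def normalise_amount(value: str) -> float:
    """Parse an amount string such as '1,234.50' into a float."""
    return float(value.replace(",", ""))


def test_normalise_amount():
    assert normalise_amount("1,234.50") == pytest.approx(1234.50)


def test_health_endpoint(monkeypatch):
    """Stub out the HTTP call so the test never hits a real service."""

    class FakeResponse:
        status_code = 200

        def json(self):
            return {"status": "ok"}

    monkeypatch.setattr(requests, "get", lambda url, timeout=5: FakeResponse())

    response = requests.get("https://example.internal/health", timeout=5)
    assert response.status_code == 200
    assert response.json()["status"] == "ok"
```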
Qualifications:
Education: MS/MTech/BTech graduates or equivalent with a focus on data science and quantitative fields (CS, Eng, Math, Eco)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data warehousing, etc.) and using SQL databases
Skills
Technical Skills
⮚ Proficient in Python and SQL. Familiarity with statistics or analytical techniques
⮚ Data warehousing experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Soft Skills
⮚ Deep Curiosity and Humility
⮚ Excellent storyteller and communicator
⮚ Design Thinking
Responsibilities:
- Take the lead in building tools to increase the productivity of our business and product teams
- Build client facing portal to support the submission and integration of games from external developers
- Collaborate with teams in a range of disciplines
- Clearly communicate challenges and progress to stakeholders
- Adopt and learn new technologies
Basic Qualifications:
- 5+ years professional experience in software development and a BS or MS in Computer Science or related field
- Solid understanding of Javascript, NodeJS, PHP, SQL, C#
- Strong knowledge of AWS Cloud architecture, services, and DevOps
- Adhere to software design patterns and have knowledge of algorithms
- Experience with databases and database programming (MySQL, NoSQL, etc.)
- Comfortable understanding and implementing REST APIs, knowledge of AJAX patterns and principles
- In-depth knowledge of modern HTML/CSS
- Strong understanding of web architecture, security, cookies, reverse-proxies
- Have a solid knowledge of web debugging tools (Firebug or Chrome Developer Console)
Pluses:
- Bonus points for data warehouse experience (Snowflake, Redshift)
- Experience in game programming and Unity development
- Knowledge of unit testing and test driven development
- A passion for games (of any type) as well as a passion for code
- Knowledge of mobile gaming metrics and the mobile gaming industry
Perks:
- Free medical, dental, and vision insurance
- Work from home stipend on each paycheck
- Competitive Salary
- Flexible Time Off - work hard and take time when you need it
- Interested? Send us your resume and let's talk!
Job Description
Job Description SQL DBA - Trainee Analyst/Engineer
Experience : 0 to 1 Years
No.of Positions: 2
Job Location: Bangalore
Notice Period: Immediate / Max 15 Days
The candidate should have strong SQL knowledge. Here are a few points:
- Implement and maintain the database design
- Create database objects (tables, indexes, etc.)
- Write database procedures, functions, and triggers
Good soft skills are a must (written and verbal communication)
Good team player
Ability to work in a 24x7 support model (rotation basis)
Strong fundamentals in Algorithms, OOPs and Data Structure
Should be flexible to support multiple IT platforms
Analytical Thinking
Additional Information :
Functional Area: IT Software - DBA, Datawarehousing
Role Category: Admin/ Maintenance/ Security/ Datawarehousing
Role: DBA
Education :
B.Tech/ B.E
Skills
SQL DBA, IMPLEMENTATION, SQL, DBMS, DATA WAREHOUSING
Job Location: Chennai
Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage etc.)/ELT and data integration technologies.
• Experience in any one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data warehousing solutions and data lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of the Agile framework and delivery
Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of Digital web events, ad streams, context models
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Required:
1) WebFOCUS BI Reporting
2) WebFOCUS Administration
3) Sybase or Oracle or SQL Server or Snowflake
4) DWH Skills
Nice to have:
1) Experience in SAP BO / Crystal report / SSRS / Power BI
2) Experience in Informix
3) Experience in ETL
Responsibilities:
• Technical knowledge regarding best practices of BI development / integration.
• The candidate must understand business processes, be detail-oriented and quickly grasp new concepts.
• Additionally, the candidate will have strong presentation, interpersonal, software development and work management skills.
• Strong Advanced SQL programming skills are required
• Proficient in MS Word, Excel, Access, and PowerPoint
• Experience working with one or more BI Reporting tools as Analyst/Developer.
• Knowledge of data mining techniques and procedures and knowing when their use is appropriate
• Ability to present complex information in an understandable and compelling manner.
• Experience converting reports from one reporting tool to another
A global provider of Business Process Management services
Power BI Developer
Senior visualization engineer with 5 years’ experience in Power BI to develop and deliver solutions that enable delivery of information to audiences in support of key business processes. In addition, hands-on experience with Azure data services like ADF and Databricks is a must.
Ensure code and design quality through execution of test plans and assist in development of standards & guidelines working closely with internal and external design, business, and technical counterparts.
Candidates should have worked in agile development environments.
Desired Competencies:
- Should have minimum of 3 years project experience using Power BI on Azure stack.
- Should have good understanding and working knowledge of Data Warehouse and Data Modelling.
- Good hands-on experience of Power BI
- Hands-on experience with T-SQL/DAX/MDX/SSIS
- Data Warehousing on SQL Server (preferably 2016)
- Experience in Azure Data Services – ADF, DataBricks & PySpark
- Manage own workload with minimum supervision.
- Take responsibility of projects or issues assigned to them
- Be personable, flexible and a team player
- Good written and verbal communications
- Have a strong personality and be able to operate directly with users
The company operates on a software-as-a-service (SaaS) model.
We are looking for an experienced Database Administrator to design and maintain databases according to our company’s needs. You will be responsible for planning, developing, testing, improving and maintaining new and existing databases to help users retrieve data effectively. As part of our IT team, you will work closely with developers to ensure system consistency. You will also collaborate with administrators and clients to provide technical support and identify new requirements. Communication and organization skills are key for this position, along with a problem-solution attitude.
You get to work with some of the best minds in the industry at a place where
opportunity lurks everywhere and in everything.
Responsibilities
Your responsibilities are as follows.
• Working with cross-functional teams to develop robust solutions aligned with the business needs
• Maintaining communication and providing regular updates to the development team, ensuring solutions provided are fit for purpose
• Training other developers in the team on best practices and technologies
• Troubleshooting issues in the production environment, understanding the root cause and developing robust solutions
• Developing, implementing and maintaining solutions that are both reliable and scalable
• Capturing data analysis requirements effectively and representing them formally and visually through our data models
• Maintaining effective database performance by identifying and resolving production and application development problems
• Optimising the integration and installation of new releases
• Monitoring system performance; testing, troubleshooting and integrating new features
• Proactively recommending solutions to improve new and existing database systems
• Providing database support by coding utilities and resolving user questions and problems
• Ensuring compliance with database implementation procedures
• Performing code and design reviews as per the code review process
• Installing and organising information systems to guarantee company functionality
• Preparing accurate documentation and reports
• Migrating data from legacy systems to new solutions
• Stakeholder analysis of our current clients, company operations and applications, and programming requirements
• Collaborating with functional teams across the business to deliver end-to-end products and features enabling enhanced performance
Required Qualifications
We are looking for individuals who are curious, excited about learning, and navigating
through the uncertainties and complexities that are associated with a growing
company. Some qualifications that we think would help you thrive in this role are:
• Minimum 8 years of experience as a Database Administrator
• Strong knowledge of data structures and database systems
• In-depth expertise and hands-on experience with the MySQL/MariaDB Database Management System
• In-depth expertise and hands-on experience in database design, data maintenance, database security, data analysis and mining
• Hands-on experience with at least one web-hosting platform such as Microsoft Azure, AWS (Amazon Web Services), etc.
• Strong understanding of security principles and how they apply to web applications
• Basic knowledge of networking; desirable knowledge of business intelligence
• Desirable knowledge of data architectures related to data warehouse implementations
• Strong interpersonal skills and a desire to work collaboratively to achieve objectives
• Understanding of Agile methodologies
• Bachelor/Masters of CS/IT Engineering, BCA/MCA, B Sc/M Sc in CS/IT
Preferred Qualifications
• Sense of ownership and pride in your performance and its impact on the company’s success
• Critical thinker, Team player
• Excellent trouble-shooting and problem-solving skills
• Excellent analytical and Strong organisational skills
• Good time-management skills
• Great interpersonal and communication skills
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.
Skills required: (experience with at least most of these)
1. Experience with Big Data tools-Hadoop, Spark, Apache Beam, Kafka etc.
2. Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
• Responsible for designing, deploying, and maintaining analytics environment that processes data at scale
• Contribute design, configuration, deployment, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modelling and data warehousing at cloud scale using cloud-native solutions.
• Perform development, QA, and dev-ops roles as needed to ensure total end to end responsibility of solutions
COMPETENCIES
• Experience building, maintaining, and improving Data Models / Processing Pipeline / routing in large scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large-dataset platforms (e.g., Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks)
• Fluency in multiple programming languages, such as Python, Shell Scripting, SQL, Java, or similar languages and tools appropriate for large scale data processing.
• Experience with acquiring data from varied sources such as APIs, data queues, flat files, and remote databases (a minimal API-ingestion sketch follows this list)
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
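To illustrate the point above about acquiring data from varied sources, here is a minimal Python sketch that pages through a REST API and writes a flat file for downstream loading; the endpoint and field names are assumed placeholders:

```python
# Minimal ingestion sketch: pull records from a paginated REST API and write
# them to a flat file for downstream loading. URL and field names are placeholders.
import csv
import requests

API_URL = "https://api.example.com/v1/records"  # hypothetical endpoint

with open("records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "created_at", "amount"])
    writer.writeheader()

    page = 1
    while True:
        resp = requests.get(API_URL, params={"page": page}, timeout=30)
        resp.raise_for_status()
        rows = resp.json().get("results", [])
        if not rows:
            break
        for row in rows:
            writer.writerow({key: row.get(key) for key in ["id", "created_at", "amount"]})
        page += 1
```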
at Velocity Services
We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.
We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.
Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!
Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.
Key Responsibilities
- Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling, and overseeing overall data quality
- Work with the Office of the CTO as an active member of our architecture guild
- Writing pipelines to consume data from multiple sources (a minimal pipeline sketch follows this list)
- Writing a data transformation layer using DBT to transform millions of records into data warehouses
- Implement data warehouse entities with common re-usable data model designs, with automation and data quality capabilities
- Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
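As a rough illustration of the responsibilities above (consuming data from multiple sources and landing it for transformation), here is a minimal ELT-style sketch using pandas and SQLAlchemy; the connection strings, tables and file paths are assumptions, and in the stack described here the loads would be orchestrated with Airflow and transformed with DBT:

```python
# Minimal ELT-style pipeline sketch: land raw data from two sources into
# warehouse staging tables, leaving transformation to a tool such as DBT.
# Connection strings, table names and file paths are placeholders.
import pandas as pd
from sqlalchemy import create_engine

warehouse = create_engine("postgresql+psycopg2://user:pass@warehouse:5432/analytics")
app_db = create_engine("postgresql+psycopg2://user:pass@app-db:5432/app")

# Source 1: an operational database table
orders = pd.read_sql("SELECT * FROM orders WHERE order_date = CURRENT_DATE", app_db)
orders.to_sql("stg_orders", warehouse, if_exists="replace", index=False)

# Source 2: a partner CSV drop
payments = pd.read_csv("/data/inbound/payments.csv")
payments.to_sql("stg_payments", warehouse, if_exists="replace", index=False)
```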
What To Bring
- 3+ years of software development experience; startup experience is a plus.
- Past experience working with Airflow and DBT is preferred
- 2+ years of experience working in any backend programming language.
- Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development)
- Experienced with the formulation of ideas, building proofs-of-concept (POC) and converting them to production-ready projects
- Experience building and deploying applications on on-premise as well as AWS or Google Cloud infrastructure
- A basic understanding of Kubernetes & Docker is a must.
- Experience in data processing (ETL, ELT) and/or cloud-based platforms
- Working proficiency and communication skills in verbal and written English.
UAE Client
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premise data warehouses to data platforms on AZURE cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, as available in HDInsight and Databricks
A global business process management company
Designation – Deputy Manager - TS
Job Description
- Total of 8-9 years of development experience in data engineering. B1/BII role.
- Minimum of 4-5 years in AWS data integrations, with very good data modelling skills.
- Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
- Should have experience in delivering at least 4 Data Warehouse or Data Lake solutions on AWS.
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc. (a minimal Lambda sketch follows this list)
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning and cost modelling. AWS certifications will be an added advantage.
- Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
- Life Science & Healthcare domain background will be a plus
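To illustrate the Glue/Lambda combination mentioned above, here is a minimal AWS Lambda handler sketch that triggers a Glue job via boto3; the Glue job name is a placeholder, not a real pipeline:

```python
# Minimal AWS Lambda handler sketch: start a Glue job when the function is
# invoked (for example by S3 or Step Functions). The Glue job name is a placeholder.
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    """Start a hypothetical Glue job and return its run id."""
    response = glue.start_job_run(JobName="daily-ingestion-job")
    return {"statusCode": 200, "jobRunId": response["JobRunId"]}
```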
Qualifications
BE/BTech/ME/MTech
We at Datametica Solutions Private Limited are looking for an SQL Lead / Architect who has a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description :
Experience: 6+ Years
Work Location: Pune / Hyderabad
Technical Skills :
- Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server Developer
- Knowledge of database performance tuning techniques
- Rich experience in database development
- Experience in designing and implementing business applications using the Oracle Relational Database Management System
- Experience in developing complex database objects like Stored Procedures, Functions, Packages and Triggers using SQL and PL/SQL
Required Candidate Profile :
- Excellent communication, interpersonal, analytical skills and strong ability to drive teams
- Analyzes data requirements and data dictionaries for moderate to complex projects
- Leads data model-related analysis discussions while collaborating with Application Development teams, Business Analysts, and Data Analysts during joint requirements analysis sessions
- Translate business requirements into technical specifications with an emphasis on highly available and scalable global solutions
- Stakeholder management and client engagement skills
- Strong communication skills (written and verbal)
About Us!
A global leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle Data warehouse Assessment & Migration Planning Product
Raven Automated Workload Conversion Product
Pelican Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over these years are the key factors in achieving our success.
Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com