Data ToBiz
www.datatobiz.com
Founded: 2017
Type: Services
Size: 20-100
Stage: Bootstrapped
About
With vision comes insight, and with insight comes faith. We deliver precise information and insights so you can visualize the facts and make business moves with confidence. Everything today rests on assurance, the assurance that comes from information derived from collected raw data. At DataToBiz, we help frame and explore that raw data to bring forth the facts behind it, facts that can fuel your business to rise above the rest.
Jobs at Data ToBiz

Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
Data Warehousing
Amazon Redshift
Analytics
Python
Amazon Web Services (AWS)
Job Responsibilities:
As a Data Warehouse Engineer on our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
Develop or modify procedures to solve complex database design problems, including performance, scalability, security, and integration issues for various clients (on-site and off-site).
Design, develop, test, and support the data warehouse solution.
Adopt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
Design and implement formal data warehouse testing strategies and plans, including unit testing, functional testing, integration testing, performance testing, and validation testing.
Evaluate existing hardware and software against required standards, and configure hardware clusters to match the scale of the data.
Perform data integration using enterprise development tool-sets (e.g. ETL, MDM, CDC, data masking, data quality).
Maintain and develop all logical and physical data models for the enterprise data warehouse (EDW).
Contribute to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
Participate in data warehouse health monitoring and performance optimization, as well as quality documentation.
Job Requirements:
2+ years of experience in software development and data warehouse development for enterprise analytics.
2+ years of working with Python, with significant Redshift experience (a must) and exposure to other warehousing tools.
Deep expertise in data warehousing and dimensional modeling, and the ability to bring best practices to data management, ETL, API integrations, and data governance.
Experience with data retrieval and manipulation tools for various data sources, such as relational databases (MySQL, PostgreSQL, Oracle) and cloud-based storage.
Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience with the AWS cloud stack (S3, Glue, Redshift, Lake Formation); a minimal loading sketch follows this list.
Experience with DevOps practices that help clients deploy and scale systems as required.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of the Logistics and/or Transportation domain is a plus.
Ability to ingest and handle very large data sets (both real-time and batch) efficiently.
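For illustration only, here is a minimal sketch of one common Redshift loading pattern touched on above (Python plus an S3 COPY). The cluster endpoint, credentials, bucket, table, and IAM role names are hypothetical placeholders, not details from this job description.

# Minimal sketch: bulk-load a batch file staged in S3 into a Redshift table via COPY.
# All connection details below are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="change-me",
)
copy_sql = """
    COPY warehouse.orders_staging
    FROM 's3://example-bucket/batch/orders.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
    CSV IGNOREHEADER 1 TIMEFORMAT 'auto';
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # Redshift pulls the file directly from S3 and loads it in bulk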

Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
ETL
Amazon Web Services (AWS)
Amazon Redshift
Python
Job Responsibilities:
Develop new data pipelines and ETL jobs for processing millions of records, built to scale with growth.
Optimise pipelines to handle real-time data, batch updates, and historical data.
Establish scalable, efficient, automated processes for complex, large-scale data analysis.
Write high-quality code to gather and manage large data sets (both real-time and batch) from multiple sources, perform ETL, and store the results in a data warehouse; a minimal ETL sketch follows the requirements list below.
Manipulate and analyse complex, high-volume, high-dimensional data from varied sources using a variety of tools and data analysis techniques.
Participate in data pipeline health monitoring and performance optimisation, as well as quality documentation.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
Job Requirements:
2+ years of experience in software development and data pipeline development for enterprise analytics.
2+ years of working with Python, with exposure to various warehousing tools.
In-depth experience with one or more commercial tools such as AWS Glue, Talend, Informatica, or DataStage.
Experience with relational databases such as MySQL, MS SQL Server, and Oracle is a must.
Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
Experience with DevOps practices that help clients deploy and scale systems as required.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of the Logistics and/or Transportation domain is a plus.
Hands-on experience with traditional databases and ERP systems such as Sybase and PeopleSoft.
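As a rough illustration of the extract-transform-load flow described above, the sketch below pulls a day of orders from a relational source, applies a simple transform, and stages the result in S3 for a downstream warehouse load. The connection string, column names, bucket, and paths are hypothetical placeholders, not part of this posting.

# Minimal batch-ETL sketch (illustrative only); all names are placeholders.
import pandas as pd
import boto3
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://etl_user:change-me@db.example.com/sales")  # placeholder source

# Extract: pull yesterday's orders from the operational database.
orders = pd.read_sql(
    "SELECT * FROM orders WHERE order_date = CURDATE() - INTERVAL 1 DAY", engine
)

# Transform: normalise column names and derive a total-amount column.
orders.columns = [c.lower() for c in orders.columns]
orders["total_amount"] = orders["quantity"] * orders["unit_price"]

# Load (stage): write Parquet locally, then upload to S3 for a warehouse COPY.
orders.to_parquet("/tmp/orders.parquet", index=False)
boto3.client("s3").upload_file("/tmp/orders.parquet", "example-etl-bucket", "staging/orders.parquet")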

Chandigarh
2 - 5 yrs
₹4L - ₹6L / yr
Algorithms
ETL
Python
Machine Learning (ML)
Deep Learning
Job Summary
DataToBiz is an AI and data analytics services startup. We are a team of young and dynamic professionals looking for an exceptional data scientist to join our team in Chandigarh. We are solving some very exciting business challenges by applying cutting-edge machine learning and deep learning technology.
Being a consulting and services startup, we are looking for quick learners who can work in a cross-functional team of consultants, SMEs from various domains, UX architects, and application development experts to deliver compelling solutions through the application of data science and machine learning. The ideal candidate will have a passion for finding patterns in large datasets, the ability to quickly understand the underlying domain, and the expertise to apply machine learning tools and techniques to create insights from the data.
Responsibilities and Duties
As a Data Scientist on our team, you will be responsible for solving complex big-data problems for various clients (on-site and off-site) using data mining, statistical analysis, machine learning, and deep learning.
One of the primary responsibilities will be to understand the business need and translate it into an actionable analytical plan in consultation with the team.
Ensure that the analytical plan aligns with the customer’s overall strategic need.
Understand and identify appropriate data sources required for solving the business problem at hand.
Explore, diagnose and resolve any data discrepancies – including but not limited to any ETL that may be required, missing value and extreme value/outlier treatment using appropriate methods.
Execute project plan to meet requirements and timelines.
Identify success metrics and monitor them to ensure high-quality output for the client.
Deliver production-ready models that can be deployed in the production system.
Create relevant output documents as required: PowerPoint decks, Excel files, data frames, etc.
Overall project management: create a project plan and timelines and obtain sign-off.
Monitor project progress in conjunction with the project plan – report risks, scope creep etc. in a timely manner.
Identify and evangelize new and upcoming analytical trends in the market within the organization.
Implement these algorithms/methods/techniques in R or Python; a minimal Python sketch follows the requirements list below.
Required Experience, Skills and Qualifications
3+ years of experience in data mining and statistical modeling for predictive and prescriptive enterprise analytics.
2+ years of working with Python and machine learning, with exposure to one or more ML/DL frameworks such as TensorFlow, Caffe, scikit-learn, MXNet, or CNTK.
Exposure to ML techniques and algorithms for working with different data formats, including structured data, unstructured data, and natural language.
Experience with data retrieval and manipulation tools for various data sources: REST/SOAP APIs, relational (MySQL) and NoSQL (MongoDB) databases, IoT data streams, cloud-based storage, and HDFS.
Strong foundation in algorithms and data science theory.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of Telecom and/or FinTech Domain is a plus.
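As a small illustration of the modeling workflow sketched above (split the data, fit a model, monitor success metrics), here is a minimal scikit-learn example. It uses a toy dataset for self-containment; a real engagement would start from client data and a business-driven metric.

# Minimal sketch: train a classifier and report held-out precision/recall/F1.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Identify and monitor success metrics on held-out data.
print(classification_report(y_test, model.predict(X_test)))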
Similar companies
Founded: 2012
Type: Products & Services
Size: 100-1000
Stage: Raised funding
About the company
BizKonnect is a global actionable sales intelligence solution company. We help you reach the decision makers whose pain points you are solving, and get you there through your known business connections, enabling you to execute account-based marketing at scale. We provide actionable sales intelligence, including technology user lists and decision-maker contact lists that our clients use to reach decision makers. We also provide deep sales intelligence such as market analysis, heat maps, organizational charts, technology maps, and connection maps to support account-based marketing. Our theme-based, personalized campaigning solution is effective at securing qualified meetings with decision makers, and the assisted service model makes it a virtual sales assistant for your sales executives. With our own database of 20 million companies, we are a trusted data and campaign partner for global sales and marketing teams.
Jobs: 4
Founded: 2015
Type: Products & Services
Size: 20-100
Stage: Profitable
About the company
Advanced analytics, Data Management Solutions & Services Start-Up
Jobs: 4
Founded: 1975
Type: Products & Services
Size: 100-1000
Stage: Profitable
About the company
Datamatics provides IT products and solutions, consulting, and BPO services, supporting clients' digital journeys by leveraging technology.
Jobs: 3
Founded: 2012
Type: Services
Size: 100-1000
Stage: Profitable
About the company
Tiger Analytics is an advanced analytics and AI consulting company enabling enterprises to generate business value through data. We are the trusted data sciences and data engineering partner for several Fortune 500 firms. We bring expertise in marketing science, customer analytics, and operations & planning analytics.
Jobs: 4
Founded: 2014
Type: Products & Services
Size: 100-1000
Stage: Profitable
About the company
Data Services | DataPure
Jobs: 5
Founded: 2014
Type: Services
Size: 0-20
Stage: Raised funding
About the company
The ultimate solution for Business and Design
Jobs: 1
Founded: 2017
Type: Services
Size: 0-20
Stage: Profitable
About the company
Truebiz Learning is a trusted advisor helping corporates elevate employee performance through sales training, customer delight, innovative training content, and managed learning.
Jobs: 1
Founded: 2015
Type: Services
Size: 100-1000
Stage: Profitable
About the company
ZiMetrics provides big data, IoT, and business intelligence (BI) services and solutions for high-tech and IT companies, combining machine learning and data science analytics with a global delivery model to produce high ROI and quality for customers.
Jobs: 1
Founded: 2020
Type: Services
Size: 100-1000
Stage: Profitable
About the company
The DataTech Labs specializes in providing enterprise platform solutions to businesses across various industries. Established nearly eighteen years ago by CEO Dr. Amit Andre, the company focuses on digital transformation and developing the latest technology to enhance efficiency and excellence through digital learning programs and solutions. The company delivers cloud, on-premises, and hybrid solutions to industries ranging from public sector to healthcare.
Jobs: 1