Key responsibility is to design, develop & maintain efficient data models for the organization, ensuring optimal query performance for the consumption layer.
Developing, deploying & maintaining a repository of UDXs written in Java / Python (a minimal sketch follows this list).
Develop optimal data model designs, analyze complex distributed data deployments, and make recommendations to optimize performance based on data consumption patterns, performance expectations, the queries executed on the tables/databases, etc.
Perform periodic database health checks and maintenance.
Design collections in a NoSQL database for efficient performance.
Document & maintain a data dictionary from various sources to enable data governance.
Coordinate with business teams, IT, and other stakeholders to provide best-in-class data pipeline solutions: exposing data via APIs, loading downstream systems, NoSQL databases, etc.
Implement data governance processes and ensure data security.
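To make the UDX item above concrete, here is a minimal, hypothetical sketch of registering a scalar Python UDF on Amazon Redshift (one of the warehouses this role names); the function name, masking logic, and connection string are illustrative assumptions, not details from the posting.

```python
# Hypothetical sketch: register a scalar Python UDF (UDX) on Amazon Redshift.
# Redshift executes the $$-quoted body with its embedded Python runtime.
import psycopg2  # assumes a PostgreSQL-compatible driver for Redshift

CREATE_UDF = """
CREATE OR REPLACE FUNCTION f_mask_account(acct VARCHAR)
RETURNS VARCHAR
STABLE
AS $$
    if acct is None:
        return None
    return '*' * (len(acct) - 4) + acct[-4:]   # keep only the last 4 chars
$$ LANGUAGE plpythonu;
"""

def register_udf(dsn: str) -> None:
    """Create (or replace) the masking UDF on the target cluster."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(CREATE_UDF)

if __name__ == "__main__":
    # Placeholder DSN; substitute real cluster credentials.
    register_udf("host=my-cluster port=5439 dbname=dw user=etl password=change-me")
```

Once deployed, the function can be called like any built-in, e.g. SELECT f_mask_account(account_no) FROM loans.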
Requirements
Extensive working experience in Designing & Implementing Data models in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc).
Programming experience using Python / Java.
Working knowledge in developing & deploying User-defined Functions (UDXs) using Java / Python.
Strong understanding & extensive working experience in OLAP Data Warehousing (Redshift, Synapse, Snowflake, Teradata, Vertica, etc) architecture and cloud-native Data Lake (S3, ADLS, BigQuery, etc) Architecture.
Strong knowledge in design, development & performance tuning of 3NF/flat/hybrid data models.
Extensive technical experience in SQL including code optimization techniques.
Strong knowledge of database performance tuning and troubleshooting.
Knowledge of collection design in any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc), along with implementation of best practices (see the sketch after this requirements list).
Ability to understand business functionality, processes, and flows.
Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently.
Any OLAP DWH DBA experience and user management will be an added advantage.
Knowledge of financial-industry-specific data models such as FSLDM, the IBM Financial Data Model, etc will be an added advantage.
Experience in Snowflake will be an added advantage.
Working experience in BFSI/NBFC and an understanding of loan/mortgage data will be an added advantage.
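As an illustration of the NoSQL collection-design requirement above, here is a hedged sketch of keying a DynamoDB table around a single known access pattern; the table, attribute names, and region are assumptions made for the example, not part of the role description.

```python
# Sketch: design a DynamoDB table for the access pattern
# "all transactions for a loan, ordered by time". All names are illustrative.
import boto3

dynamodb = boto3.client("dynamodb", region_name="ap-south-1")

dynamodb.create_table(
    TableName="loan_transactions",
    AttributeDefinitions=[
        {"AttributeName": "loan_id", "AttributeType": "S"},
        {"AttributeName": "txn_ts", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "loan_id", "KeyType": "HASH"},  # partition key spreads load across loans
        {"AttributeName": "txn_ts", "KeyType": "RANGE"},  # sort key gives cheap time-range queries
    ],
    BillingMode="PAY_PER_REQUEST",
)
```

The design choice here is the usual NoSQL one: derive the keys from the dominant query, so a single Query call serves it without scans.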
Functional knowledge
Data Governance & Quality Assurance
Modern OLAP Database Architecture & Design
Linux
Data structures, algorithms & data modeling techniques
NoSQL database architecture
Data Security
About
They provide both wholesale and retail funding.
Work on execution and scheduling of all tasks related to assigned projects' deliverable dates
Optimize and debug existing code to make it scalable and improve performance
Design, development, and delivery of tested code and machine learning models into production environments
Work effectively in teams, managing and leading teams
Provide effective, constructive feedback to the delivery leader
Manage client expectations and work with an agile mindset with machine learning and AI technology
Design and prototype data-driven solutions
Eligibility
Highly experienced in designing, building, and shipping scalable, production-quality machine learning algorithms in Python
Working knowledge and experience in NLP core components (NER, Entity Disambiguation, etc.)
In-depth expertise in Data Munging and Storage (Experienced in SQL, NoSQL, MongoDB, Graph Databases)
Expertise in writing scalable APIs for machine learning models (a minimal serving sketch follows this list)
Experience with maintaining code logs, task schedulers, and security
Working knowledge of machine learning techniques, feed-forward, recurrent and convolutional neural networks, entropy models, supervised and unsupervised learning
Experience with at least one of the following: Keras, Tensorflow, Caffe, or PyTorch
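Since the list above asks for scalable APIs around ML models, here is a minimal serving sketch using FastAPI; the model file, route, and feature shape are assumptions for illustration, not from the posting.

```python
# Minimal sketch of a prediction API wrapping a serialized model.
# "model.pkl" is a hypothetical scikit-learn estimator saved with joblib.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")

class Features(BaseModel):
    values: list[float]  # flat feature vector; order fixed by the training pipeline

@app.post("/predict")
def predict(features: Features) -> dict:
    """Score one feature vector and return the predicted label."""
    prediction = model.predict([features.values]).tolist()[0]
    return {"prediction": prediction}
```

Run with `uvicorn app:app` (assuming the file is app.py); scaling out is then a matter of running more workers behind a load balancer.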
3+ years of experience applying AI/ML/NLP/deep learning and data-driven statistical analysis & modelling solutions.
Programming skills in Python and knowledge of statistics.
Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forest, neural networks, feature selection/reduction, clustering, parameter tuning, etc.). Familiarity with reinforcement learning is highly desirable.
Experience in the financial domain and familiarity with financial models are highly desirable.
Experience in image processing and computer vision.
Experience working with building data pipelines.
Good understanding of Data preparation, Model planning, Model training, Model validation, Model deployment and performance tuning.
Should have hands-on experience with some of these methods: Regression, Decision Trees, CART, Random Forest, Boosting, Evolutionary Programming, Neural Networks, Support Vector Machines, Ensemble Methods, Association Rules, Principal Component Analysis, Clustering, Artificial Intelligence (a brief tuning example follows this list).
Should have experience working with large data sets using a Postgres database.
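As a brief illustration of the supervised-learning and parameter-tuning items above, here is a hedged sketch using scikit-learn on synthetic data; the dataset, grid, and metric are arbitrary choices for the example.

```python
# Sketch: random forest with cross-validated hyperparameter tuning.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5,                # 5-fold cross-validation drives the parameter choice
    scoring="roc_auc",
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))  # held-out AUC
```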
Company Profile
XpressBees – a logistics company started in 2015 – is among the fastest-growing companies in its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last mile management system. While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.
Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured / unstructured), and build production pipelines for our machine learning models, and (RT, NRT, Batch) reporting & dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.
What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured). Envision & implement the optimal data modelling, physical design and performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.
Qualifications & Experience relevant for the role
• A bachelor's degree in Computer Science or a related field with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing and micro-batching to make technology & design choices.
• Strong experience in System Integration, Application Development, ETL and Data-Platform projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all SDLC processes
• Experience in cloud architecture (AWS)
• Proven track record of keeping existing technical skills current and developing new ones, enabling strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to learning new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization understand application architectures and integration approaches, architect advanced cloud-based solutions, and help launch the build-out of those systems.
• Passion for educating, training, designing, and building end-to-end systems.
• Should have good experience with Python or Scala/PySpark/Spark
• Experience with advanced SQL
• Experience with Azure Data Factory and Databricks
• Experience with Azure IoT, Cosmos DB and Blob Storage
• API management and FHIR API development
• Proficient with Git and CI/CD best practices
• Experience working with Snowflake is a plus
About Company
Helical Insight, an open-source Business Intelligence tool from Helical IT Solutions Pvt. Ltd, based out of Hyderabad, is looking for freshers with strong knowledge of SQL. Helical Insight has more than 50 clients from various sectors and has been awarded the most promising company in the Business Intelligence space. We are looking for a rock-star teammate to join our company.
Job Brief
We are looking for a Business Intelligence (BI) Developer to create and manage BI and analytics solutions that turn data into knowledge. In this role, you should have a background in data and business analysis. You should be analytical and an excellent communicator. If you also have business acumen and a problem-solving aptitude, we'd like to meet you. Excellent knowledge of SQL queries is required; basic knowledge of HTML, CSS and JS is also required. You would be working closely with customers across various domains to understand their data and business requirements, and to deliver the required analytics in the form of various reports, dashboards, etc. This is an excellent client-facing role with the opportunity to work across various sectors and geographies, as well as various kinds of DBs including NoSQL, RDBMS, graph DBs, columnar DBs, etc.
Responsibilities
Attend client calls to gather requirements and show progress
Translate business needs into technical specifications
Design, build and deploy BI solutions (e.g. reporting tools)
Maintain and support data analytics platforms
Conduct unit testing and troubleshooting
Evaluate and improve existing BI systems
Collaborate with teams to integrate systems
Develop and execute database queries and conduct analyses (a toy reporting query follows this section)
Create visualizations and reports for requested projects
Develop and update technical documentation
Requirements
Excellent expertise in SQL queries
Proven experience as a BI Developer or Data Scientist
Background in data warehouse design (e.g. dimensional modeling) and data mining
In-depth understanding of database management systems, online analytical processing (OLAP) and ETL (extract, transform, load) frameworks
Familiarity with BI technologies
Proven ability to take initiative and be innovative
Analytical mind with a problem-solving aptitude
BE in Computer Science/IT
Education: BE/BTech/MCA/BCA/MTech/MS, or equivalent preferred.
Interested candidates can call us on +91 7569 765 162.
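To make the "develop and execute database queries" responsibility concrete, here is a toy, self-contained reporting query of the kind a BI developer writes daily; the schema and data are invented for the example, and SQLite stands in for whatever RDBMS the client uses.

```python
# Toy reporting query: revenue by region, using in-memory SQLite for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('North', 120.0), ('North', 80.0), ('South', 200.0);
""")

report = conn.execute("""
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM sales
    GROUP BY region
    ORDER BY revenue DESC
""").fetchall()

for region, orders, revenue in report:
    print(f"{region}: {orders} orders, {revenue:.2f} revenue")
```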
• Critical-thinking mind who likes to solve complex problems, loves programming, and cherishes working in a fast-paced environment.
• Strong Python development skills, with 7+ years of experience with SQL.
• A bachelor's or master's degree in Computer Science or related areas.
• 5+ years of experience in data integration and pipeline development.
• Experience implementing Databricks Delta Lake and data lakes (a brief upsert sketch appears below).
• Expertise designing and implementing data pipelines using modern data engineering approaches and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark.
• Experience working with multiple file formats (Parquet, Avro, Delta Lake) & APIs.
• Experience with AWS cloud data integration with S3.
• Hands-on development experience with Python and/or Scala.
• Experience with SQL and NoSQL databases.
• Experience using data modeling techniques and tools (focused on dimensional design).
• Experience with micro-service architecture using Docker and Kubernetes.
• Experience working with one or more of the public cloud providers, i.e. AWS, Azure or GCP.
• Experience effectively presenting and summarizing complex data to diverse audiences through visualizations and other means.
• Excellent verbal and written communication skills and strong leadership capabilities.
Skills: ML modelling, Python, SQL, Azure Data Lake, Data Factory, Databricks, Delta Lake
Experience in handling large-scale data engineering pipelines. Excellent verbal and written communication skills. Proficient in PowerPoint or other presentation tools. Ability to work quickly and accurately on multiple projects.
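As a brief sketch of the Databricks Delta Lake experience asked for above, here is a hedged upsert (MERGE) example; the S3 paths and join key are assumptions, and on Databricks the Delta extensions are preconfigured (elsewhere the Spark session must be configured for Delta).

```python
# Hedged sketch: Delta Lake upsert (MERGE) of a staging batch into a target table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

# Hypothetical staging data and target table paths.
updates = spark.read.parquet("s3://example-bucket/staging/customers/")
target = DeltaTable.forPath(spark, "s3://example-bucket/delta/customers/")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # refresh rows that changed
    .whenNotMatchedInsertAll()   # append rows seen for the first time
    .execute()
)
```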
• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical and physical data models for the implementation teams
• Strong SQL is a must: advanced SQL working knowledge and experience working with a variety of relational databases, including SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow and Kafka (a small DAG sketch follows this list)
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda and Elasticsearch
• Ability to use a major programming language (e.g. Python/Java) to process data for modelling
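For the S3/Spark/Airflow pipeline item above, here is a small DAG sketch; the DAG id, schedule, and task bodies are placeholders, and the syntax assumes Airflow 2.4+ (where `schedule` replaced `schedule_interval`).

```python
# Sketch: a two-step daily pipeline skeleton (extract from S3, then transform).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Download the day's raw file from S3 (stubbed for the sketch)."""
    ...

def transform():
    """Run the Spark job that cleans and partitions the data (stubbed)."""
    ...

with DAG(
    dag_id="daily_orders_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task    # transform runs only after extract succeeds
```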
We are looking for an ML Architect to help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products. You must have strong experience using a variety of data mining and data analysis methods, building and implementing models, using/creating algorithms and creating/running simulations. You must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes. The role also involves automating the identification of textual data, with its properties and structure, from various types of documents.
Responsibilities
Selecting features, building and optimizing classifiers using machine learning techniques
Data mining using state-of-the-art methods
Enhancing data collection procedures to include information that is relevant for building analytic systems
Processing, cleansing, and verifying the integrity of data used for analysis
Creating automated anomaly detection systems and constantly tracking their performance (an illustrative sketch appears after this list)
Assemble large, complex data sets that meet functional / non-functional business requirements.
Secure and manage GPU cluster resources for events as needed
Write comprehensive internal feedback reports and find opportunities for improvements
Manage GPU instances/machines to increase the performance and efficiency of the ML/DL model.
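As an illustrative sketch of the automated anomaly detection responsibility above (one of several viable techniques, chosen here for brevity), this uses scikit-learn's IsolationForest on synthetic data:

```python
# Sketch: unsupervised anomaly detection with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 3))   # typical observations
outliers = rng.normal(6.0, 1.0, size=(10, 3))  # injected anomalies
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)                   # -1 = anomaly, 1 = normal
print(f"flagged {int((labels == -1).sum())} points as anomalous")
```

In production, the "constant tracking" part would wrap this in scheduled re-scoring and alerting on the flagged count.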
Skills and Qualifications
Strong Hands-on experience in Python Programming
Working experience with Computer Vision models - Object Detection Model, Image Classification
Good experience in feature extraction, feature selection techniques and transfer learning (a short transfer-learning sketch follows this list)
Working experience building deep learning NLP models for text classification, and image analytics (CNN, RNN, LSTM).
Working experience in any of the AWS/GCP cloud platforms, with exposure to fetching data from various sources.
Good experience in exploratory data analysis, data visualisation, and other data preprocessing techniques.
Knowledge in any one of the DL frameworks like Tensorflow, Pytorch, Keras, Caffe
Good knowledge of statistics, data distributions, and supervised and unsupervised machine learning algorithms.
Exposure to OpenCV
Familiarity with GPUs + CUDA
Experience with NVIDIA software for cluster management and provisioning such as nvsm, dcgm and DeepOps.
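As a short transfer-learning sketch for the items above, here is the common pattern of reusing a pretrained backbone and training only a new head; the class count is an assumption, and the weights argument follows torchvision 0.13+.

```python
# Sketch: transfer learning - freeze a pretrained ResNet-18, retrain its head.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # assumption: a 5-class image classification task

model = models.resnet18(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
for param in model.parameters():
    param.requires_grad = False                   # freeze learned features

# Replace the final layer; only this head receives gradients during training.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
```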
We are looking for a candidate with 14+ years of experience who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with AWS cloud services: EC2, RDS, AWS SageMaker (added advantage)
Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.