Data Analyst
Posted by Alokha Raj
2 - 3 yrs
₹4L - ₹6L / yr
Remote only
Skills
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
Spotfire
Python
HTML/CSS
MySQL
Data entry
MS-Excel
Data management

About Davis Index


Davis Index is a market intelligence platform and publication that provides price benchmarks for recycled materials and primary metals.

Our team of dedicated reporters, analysts, and data specialists publishes and processes over 1,400 proprietary price indexes, metals futures prices, and other reference data, including market intelligence, news, and analysis, through an industry-leading technology platform.


About the role 

Here at Davis Index, we aim to bring accurate market insights, news, and data to the recycling industry, enabling sellers and buyers to boost their margins and access daily market intelligence, data analytics, and news.


We’re looking for a keen data expert to take on a high-impact role focused on end-to-end data management, BI, and analysis tasks within a specific functional area or data type. If you enjoy the challenge of building, extracting, refining and, above all, automating data processes, apply now!


Key areas: data integration, data migration, data warehouse automation, data synchronization, automated data extraction, and other data management projects.


What you will do in this role

  • Build and maintain data pipelines from internal databases.
  • Map data elements between source and target systems.
  • Create data documentation including mappings and quality thresholds.
  • Build and maintain analytical SQL/MongoDB queries, scripts.
  • Build and maintain Python scripts for data analysis, cleaning, and structuring (a sketch of this kind of script follows this list).
  • Build and maintain visualizations that present large volumes of information in comprehensible forms, making it simple to recognise patterns, trends, and correlations.
  • Identify and develop data quality initiatives and opportunities for automation.
  • Investigate, track, and report data issues.
  • Undertake production data management functions as assigned/required.
  • Utilize various data workflow management and analysis tools.
  • Learn new processes, tools, and technologies as the role requires.
  • Maintain an understanding of fundamental AI and ML concepts.
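
By way of illustration, here is a minimal sketch of the kind of Python cleaning script this role describes, using pandas over a hypothetical CSV of price quotes (the file name, column names, and quality rule are illustrative assumptions, not Davis Index internals):

    import pandas as pd

    # Load a hypothetical export of raw price quotes (file and columns are illustrative).
    raw = pd.read_csv("price_quotes.csv", parse_dates=["quote_date"])

    # Normalise text fields and drop exact duplicates.
    raw["material"] = raw["material"].str.strip().str.lower()
    raw = raw.drop_duplicates()

    # Apply a simple quality threshold: flag and drop non-positive prices.
    bad = raw["price_usd"] <= 0
    print(f"Dropping {bad.sum()} rows failing the price threshold")
    clean = raw.loc[~bad]

    # Persist the cleaned set for downstream analysis and visualization.
    clean.to_csv("price_quotes_clean.csv", index=False)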


Must have experience and qualifications

  • Bachelor's degree in Computer Science, Engineering, or Data related field required.
  • 2+ years’ experience in data management.
  • Advanced proficiency with Microsoft Excel and VBA, or Google Sheets and Apps Script.
  • Proficiency with MongoDB/SQL (an analytical query sketch follows this list).
  • Familiarity with Python for data manipulation and process automation preferred.
  • Proficiency with various data types and formats including, but not limited to, JSON.
  • Intermediate proficiency with HTML/CSS.
  • Strong background in data analysis, data reporting, and data management, coupled with adept process mapping and improvement.
  • Strong research skills.
  • Attention to detail.
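
As a small illustration of the analytical querying the role calls for, the sketch below runs a typical aggregation in Python; it uses the built-in sqlite3 module so it runs without a server, but the same query shape applies to MySQL or, via the aggregation pipeline, MongoDB (table and column names are assumptions for illustration):

    import sqlite3

    # In-memory database so the sketch is self-contained; the SQL itself is standard.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE quotes (material TEXT, quote_date TEXT, price_usd REAL)")
    con.executemany(
        "INSERT INTO quotes VALUES (?, ?, ?)",
        [("copper scrap", "2024-01-02", 7800.0),
         ("copper scrap", "2024-01-03", 7850.0),
         ("aluminium scrap", "2024-01-02", 2200.0)],
    )

    # A typical analytical query: average price per material over the period.
    for material, avg_price in con.execute(
        "SELECT material, AVG(price_usd) FROM quotes GROUP BY material ORDER BY material"
    ):
        print(f"{material}: {avg_price:.2f}")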


What you can expect

Work closely with a global team helping bring market intelligence to the recycling world. As part of the Davis Index team, we look to foster relationships and help you grow with us. You can also expect:

  • Work with leading minds from the recycling industry and be part of a growing, energetic global team
  • Exposure to new developments and tools in your field, supporting career growth and skill building, along with competitive compensation
  • Health insurance coverage, paid vacation days and flexible work hours helping you maintain a work-life balance
  • Have the opportunity to network and collaborate in a diverse community


Apply directly using this link: https://nyteco.keka.com/careers/jobdetails/54122



About Nyteco

Founded: 2021
Type: Products & Services
Size: 20-100
Stage: Raised funding

Nyteco Inc is a green tech venture for the recycled materials industry and manufacturing supply chain.


We serve the industry through our flagship company - Jules AI.

Tech stack
React.js
React Native
GraphQL
PostgreSQL
TypeScript
JS
C++
Java
Candid answers by the company
What does the company do?
Nyteco aims to bring leading tech solutions to the recycling industry to help grow its trading business, connect with one another and much more!

Product showcase
Jules AI
A lead-to-invoice solution for recycled materials traders. Jules AI makes it easy to capture, notify, execute and track deals anywhere and anytime.
Connect with the team
Jean-Phillipe Boul
Sean Davidson
Reda Boumadhi
Arnaud Boucheron
Krit Iyer
Company social profiles
LinkedIn

Similar jobs

Strategic Toolkit for Capital Productivity
Agency job
via Qrata by Rayal Rajan
Remote only
5 - 10 yrs
₹12L - ₹45L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
TensorFlow
+1 more
What would make you a good fit?

  • You’re both relentless and kind, and don’t see these as being mutually exclusive
  • You have a self-directed learning style, an insatiable curiosity, and a hands-on execution mindset
  • You have deep experience working with product and engineering teams to launch machine learning products that users love in new or rapidly evolving markets
  • You flourish in uncertain environments and can turn incomplete, conflicting, or ambiguous inputs into solid data-science action plans
  • You bring best practices to feature engineering, model development, and ML operations
  • Your experience in deploying and monitoring the performance of models in production enables us to implement a best-in-class solution
  • You have exceptional writing and speaking skills with a talent for articulating how data science can be applied to solve customer problems

Must-Have Qualifications

  • Graduate degree in engineering, data science, mathematics, physics, or another quantitative field
  • 5+ years of hands-on experience in building and deploying production-grade ML models with ML frameworks (TensorFlow, Keras, PyTorch) and libraries like scikit-learn
  • Track record in building ML pipelines for time series, classification, and predictive applications
  • Expert-level skills in Python for data analysis and visualization, hypothesis testing, and model building
  • Deep experience with ensemble ML approaches including random forests and XGBoost (a small scikit-learn sketch follows this list), and experience with databases and querying models for structured and unstructured data
  • A knack for using data visualization and analysis tools to tell a story
  • You naturally think quantitatively about problems and work backward from a customer outcome
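
To make the ensemble requirement concrete, here is a minimal scikit-learn sketch that fits a random forest on synthetic data and scores it on a held-out split (the dataset and hyperparameters are placeholders for illustration, not part of this listing):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic data stands in for a real classification problem.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit a random forest and evaluate on the held-out split.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))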

What’ll make you stand out (but not required)

  • You have a keen awareness of or interest in network analysis/graph analysis or NLP
  • You have experience in distributed systems and graph databases
  • You have a strong connection to finance teams or closely related domains, the challenges they face, and a deep appreciation for their aspirations
With a reputed service-based company
Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹15L / yr
SQL
MySQL
MySQL DBA
MariaDB
MS SQLServer
Role Description
As a Database Administrator, you will be responsible for designing, testing, planning, implementing, protecting, operating, managing and maintaining our company’s databases. The goal is to provide a seamless flow of information throughout the company, considering both backend data structure and frontend accessibility for end-users. You get to work with some of the best minds in the industry at a place where opportunity lurks everywhere and in everything.
Responsibilities
Your responsibilities are as follows.
  • Build database systems of high availability and quality depending on each end user’s specialised role
  • Design and implement databases in accordance with end users’ information needs and views
  • Define users and enable data distribution to the right user, in the appropriate format and in a timely manner
  • Use high-speed transaction recovery techniques and back up data (a minimal backup sketch follows this list)
  • Minimise database downtime and manage parameters to provide fast query responses
  • Provide proactive and reactive data management support and training to users
  • Determine, enforce and document database policies, procedures and standards
  • Perform tests and evaluations regularly to ensure data security, privacy and integrity
  • Monitor database performance, implement changes and apply new patches and versions when required
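
As a minimal, hedged sketch of the backup responsibility above, the snippet below performs an online copy of a live database in Python; it uses the built-in sqlite3 module so it is self-contained, whereas the databases named in this listing (MySQL, MariaDB, MS SQL Server) would use their native utilities such as mysqldump (file names are illustrative):

    import sqlite3

    # Create a small source database as a stand-in for a production system.
    src = sqlite3.connect("app.db")
    src.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    src.execute("INSERT INTO users (name) VALUES ('demo')")
    src.commit()

    # Online backup: copies the live database page by page without blocking readers.
    dst = sqlite3.connect("app_backup.db")
    with dst:
        src.backup(dst)
    dst.close()
    src.close()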
Required Qualifications
We are looking for individuals who are curious, excited about learning, and comfortable navigating the uncertainties and complexities that come with a growing company. Some qualifications that we think would help you thrive in this role are:
  • Minimum 4 years of experience as a Database Administrator
  • Hands-on experience with database standards and end user applications
  • Excellent knowledge of data backup, recovery, security, integrity and SQL
  • Familiarity with database design, documentation and coding
  • Previous experience with DBA case tools (frontend/backend) and third-party tools
  • Familiarity with programming language APIs
  • Problem-solving skills and ability to think algorithmically
  • Bachelor’s/Master’s in CS/IT Engineering, BCA/MCA, or B Sc/M Sc in CS/IT

Preferred Qualifications
  • Sense of ownership and pride in your performance and its impact on the company’s success
  • Critical thinking and problem-solving skills
  • Team player
  • Good time-management skills
  • Great interpersonal and communication skills
Broadcast Media Production and Distribution Company
Agency job
via Qrata by Prajakta Kulkarni
Mumbai
3 - 8 yrs
₹7L - ₹10L / yr
Python
Object Oriented Programming (OOPs)
ETL
PowerBI
Tableau
+1 more
Professional Skillset:
  • Professional experience in Python (mandatory)
  • Basic knowledge of any BI tool (Microsoft Power BI, Tableau, etc.) and experience in R will be an added advantage
  • Proficient in Excel
  • Good verbal and written communication skills


Key Responsibilities:
  • Analyze data trends and provide intelligent business insights; monitor operational and business metrics
  • Take complete ownership of the business excellence dashboard and prepare reports for senior management stating trends, patterns, and predictions using relevant data
  • Review, validate and analyse data points and implement new data analysis methodologies
  • Perform data profiling to identify and understand anomalies (a sketch of a simple profiling pass follows this list)
  • Perform analysis to assess the quality and meaning of data
  • Develop policies and procedures for the collection and analysis of data
  • Analyse existing processes with the help of data and propose process changes and/or lead process re-engineering initiatives
  • Use BI tools (Microsoft Power BI/Tableau) and develop and manage BI solutions
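
By way of illustration, here is a minimal Python sketch of the data-profiling pass described above, flagging outliers with a simple 2-sigma z-score rule (the series, column name, and threshold are assumptions for illustration):

    import pandas as pd

    # Illustrative metric series; in practice this would come from a database or export.
    df = pd.DataFrame({"daily_orders": [120, 118, 125, 119, 540, 122, 117, 121]})

    # Profiling starts with basic statistics that reveal the shape of the data.
    print(df["daily_orders"].describe())

    # Flag anomalies with a simple 2-sigma z-score rule.
    mean, std = df["daily_orders"].mean(), df["daily_orders"].std()
    df["is_anomaly"] = (df["daily_orders"] - mean).abs() > 2 * std
    print(df[df["is_anomaly"]])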
Leading Manufacturing Company
Agency job
Chennai
3 - 6 yrs
₹3L - ₹8L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Data modeling
Data Analytics
+2 more

Location:  Chennai
Education: BE/BTech
Experience: Minimum 3+ years of experience as a Data Scientist/Data Engineer

Domain knowledge: Data cleaning, modelling, analytics, statistics, machine learning, AI

Requirements:

  • To be part of Digital Manufacturing and Industrie 4.0 projects across client group of companies
  • Design and develop AI/ML models to be deployed across factories
  • Knowledge on Hadoop, Apache Spark, MapReduce, Scala, Python programming, SQL and NoSQL databases is required
  • Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks
  • Prior experience in developing AI and ML models is required
  • Experience with data from the Manufacturing Industry would be a plus

Roles and Responsibilities:

  • Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics
  • Multitasking and good communication skills are necessary
  • Entrepreneurial attitude

Additional Information:

  • Travel: Must be willing to travel for short durations within India and abroad
  • Job Location: Chennai
  • Reporting to: Team Leader, Energy Management System
JK Technosoft Ltd
Posted by Nishu Gupta
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹15L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+13 more

Roles and Responsibilities:

  • Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
  • Create pipelines for data ingestion, data transformation, building, testing, and deploying machine learning models, as well as monitoring and maintaining the performance of these models in production.
  • Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
  • Ensure that the MLOps stack is scalable, reliable, and secure.

Skills Required:

  • 3-6 years of MLOps experience
  • Preferably worked in the startup ecosystem

Primary Skills:

  • Experience with E2E MLOps systems like ClearML, Kubeflow, MLflow etc. (a minimal tracking sketch follows this list)
  • Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
  • Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
  • DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
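
As a small, hedged illustration of the experiment-tracking side of such systems, the sketch below logs one parameter and one metric with MLflow; it assumes MLflow is installed (pip install mlflow) and, by default, writes to a local mlruns/ directory (the names and values are placeholders):

    import mlflow

    # Start a tracked run; results land in ./mlruns unless a tracking server is configured.
    with mlflow.start_run(run_name="demo"):
        # Record a hyperparameter and a resulting metric for later comparison.
        mlflow.log_param("n_estimators", 100)
        mlflow.log_metric("rmse", 0.42)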

Secondary Skills:

  • Version Control Systems (VCS) tools like Git and Subversion
  • Containerization technologies like Docker and Kubernetes
  • Cloud Platforms like AWS, Azure, and Google Cloud Platform
  • Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
  • Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
  • Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
  • Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
  • Explainability and interpretability tools like LIME and SHAP


Enterprise Artificial Intelligence
Agency job
via Purple Hirez by Aditya K
Hyderabad
5 - 12 yrs
₹10L - ₹35L / yr
Analytics
Kubernetes
Apache Kafka
Data Analytics
Python
+3 more
  • 3+ years of industry experience in administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, the ELK Stack, and Fluentd, and streaming databases like Druid
  • Strong industry expertise with containerization technologies including Kubernetes and Docker Compose
  • 2+ years of industry experience in developing scalable data ingestion processes and ETLs
  • Experience with cloud platform services such as AWS, Azure or GCP, especially EKS and managed Kafka
  • Experience with scripting languages; 2+ years of industry experience in Python highly desirable
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Demonstrated expertise in building cloud-native applications
  • Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration (e.g., Jenkins)
Responsibilities
  • Design and implement large-scale data processing pipelines using Kafka, Fluentd and Druid (a producer sketch follows this list)
  • Assist in DevOps operations
  • Develop data ingestion processes and ETLs
  • Design and implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
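
To make the Kafka pipeline work concrete, here is a minimal producer sketch using the kafka-python client; it assumes a broker reachable at localhost:9092 and an illustrative topic name, so treat it as a sketch rather than a production setup:

    import json

    from kafka import KafkaProducer  # pip install kafka-python

    # Connect to a broker; localhost:9092 is an assumption for a local dev setup.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Publish one event to an illustrative topic; flush() blocks until delivery completes.
    producer.send("price-events", {"material": "copper", "price_usd": 7800.0})
    producer.flush()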
MedCords
Posted by kavita jain
Kota
0 - 1 yrs
₹1L - ₹2.5L / yr
Data Analytics
Data Analyst
R Language
Python

Required: Python, R

Experience in handling large-scale data engineering pipelines.
Excellent verbal and written communication skills.
Proficient in PowerPoint or other presentation tools.
Ability to work quickly and accurately on multiple projects.

netmedscom
Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role closely collaborates with the Data Science team, assisting them in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional and non-functional business requirements, fostering data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark); ability to create, manage and manipulate Spark DataFrames (a short sketch follows this list); expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Should have knowledge on Shell Scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.
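
As a brief illustration of the PySpark DataFrame skills named above, the sketch below builds a small DataFrame and runs a typical aggregation on a local Spark session (assumes pyspark is installed; the data is illustrative):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Local session for the sketch; in production this would point at a cluster.
    spark = SparkSession.builder.appName("demo").master("local[*]").getOrCreate()

    # Create a DataFrame and compute the average price per material.
    df = spark.createDataFrame(
        [("copper", 7800.0), ("copper", 7850.0), ("aluminium", 2200.0)],
        ["material", "price_usd"],
    )
    df.groupBy("material").agg(F.avg("price_usd").alias("avg_price")).show()

    spark.stop()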

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with proven work experience as a Big Data Engineer or in a similar role for 3-5 years.

Certifications:

Good to have at least one of the Certifications listed here:

  • AZ-900 - Azure Fundamentals
  • DP-200, DP-201, DP-203, AZ-204 - Data Engineering
  • AZ-400 - DevOps Certification

US Healthcare
Agency job
via turtlebowl by swati m
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹11L / yr
Data Analytics
Relational Database (RDBMS)
Dashboard Manager
Reporting
Trend analysis
+1 more
About

Relevant years of experience: Minimum 4-8 years of experience in data analysis and data reporting, identifying and analyzing patterns/trends.
 
Knowledge and skill sets:
  • Experience with Tableau dashboards
  • Careful and attentive to detail
  • Willing and eager to call out mistakes
  • Beginner to intermediate knowledge of relational databases, reporting, and business intelligence
  • Professional communicator
  • Inquisitive/curious, readily asking questions about anything that doesn’t make sense or feel right
  • Good interpersonal skills with a proven ability to communicate effectively (both written and verbal)
  • Well-developed skill in MS Excel
  • Displays awareness of the need for confidentiality in sensitive matters
  • Eye for detail
 
Role Description:
  • Execute tasks assigned by the reporting manager and/or Bedford SPOC
  • Identify, analyze, and interpret trends or patterns
  • Audit and report discrepancies/inconsistencies in Tableau reports/dashboards
  • Publish weekly/monthly reports in a pre-defined format and frequency to the reporting manager
 
Job Purpose:
  • Prepare reports using Tableau for delivery to clients
  • Adjust parameters and prepare custom reports using previously built dashboards
  • Print reports to PDF and deliver them to folders on a predetermined schedule
  • Become familiar with Tableau: our clients, created workbooks, parameters, filters, and databases
  • QA existing dashboards and look for inconsistencies in naming, filters, charts, tables, etc.
 
Note: Looking for immediate joiners with a notice period of 30 days.
Timing: US shift (6 p.m. - 3.30 a.m.)
Benefits: Transport facility + night shift allowance
Location: Domlur
Working: 5 days a week
 
If you are exploring this opportunity, reach out ASAP with an updated resume.
--
Thanks and Regards,
M. Swati
Associate Consultant

#intelligenthiring
India | Singapore
www.turtlebowl.com
Mumbai, Pune
0 - 3 yrs
₹0L / yr
Business Development
Data Analytics
Client Servicing
Sales
Presales
+1 more
  • Help the team source good startups that are looking to raise funds
  • Conduct preliminary understanding and evaluation of the business startups
  • Coordinate and schedule communication slots with founders of startups
  • Follow up on information sought from startups
  • Maintain records of the startups being evaluated, selected and rejected
  • Work in sync with the analysts’ team and provide them with back-office support
  • Any other tasks that may come up and need to be done by the team from time to time