About Davis Index
Davis Index is a market intelligence platform and publication that provides price benchmarks for recycled materials and primary metals.
Our team of dedicated reporters, analysts, and data specialists publishes and processes over 1,400 proprietary price indexes, metals futures prices, and other reference data, including market intelligence, news, and analysis, through an industry-leading technology platform.
About the role
Here at Davis Index, we aim to bring accurate market insights, news, and data to the recycling industry, enabling sellers and buyers to boost their margins and access daily market intelligence, data analytics, and news.
We’re looking for a keen data expert to take on a high-impact role focused on end-to-end data management, BI, and analysis within a specific functional area or data type. If you enjoy building, extracting, refining, and, most importantly, automating data processes, apply now!
Key skills
- Data visualization: Power BI, Tableau, Python
- Database management: SQL, MongoDB
- Data collection, cleaning, modelling, and analysis
- Programming languages and tools: Python, R, VBA, Apps Script, Excel, Google Sheets
What you will do in this role
- Build and maintain data pipelines from internal databases.
- Map data elements between source and target systems.
- Create data documentation including mappings and quality thresholds.
- Build and maintain analytical SQL/MongoDB queries and scripts.
- Build and maintain Python scripts for data analysis, cleaning, and structuring (see the sketch after this list).
- Build and maintain visualizations that deliver voluminous information in comprehensible forms, making it simple to recognise patterns, trends, and correlations.
- Identify and develop data quality initiatives and opportunities for automation.
- Investigate, track, and report data issues.
- Utilize various data workflow management and analysis tools.
- Learn new processes, tools, and technologies as the work demands.
- Apply an understanding of fundamental AI and ML concepts.
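As a flavour of the Python cleaning and structuring work described above, here is a minimal sketch, assuming pandas and a hypothetical CSV of raw price quotes; the column names and the quality threshold are illustrative, not Davis Index's actual schema:

```python
import pandas as pd

# Minimal sketch: clean a hypothetical CSV of raw price quotes.
# Column names ("price", "quote_date", "material", "region") are illustrative.
raw = pd.read_csv("price_quotes.csv")

cleaned = (
    raw
    .assign(
        price=lambda df: pd.to_numeric(df["price"], errors="coerce"),
        quote_date=lambda df: pd.to_datetime(df["quote_date"], errors="coerce"),
    )
    .dropna(subset=["price", "quote_date"])        # drop rows that failed to parse
    .drop_duplicates(subset=["material", "region", "quote_date"])
)

# Simple quality threshold: flag prices more than 3 standard deviations
# from the per-material mean.
stats = cleaned.groupby("material")["price"].agg(["mean", "std"])
cleaned = cleaned.join(stats, on="material")
cleaned["outlier"] = (cleaned["price"] - cleaned["mean"]).abs() > 3 * cleaned["std"]

cleaned.to_csv("price_quotes_clean.csv", index=False)
```

In practice a script like this would feed the documentation and quality-threshold work listed above, with the thresholds agreed per data type rather than hard-coded.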
Must have experience and qualifications
- Bachelor's degree in Computer Science, Engineering, or a data-related field required.
- 2+ years’ experience in data management.
- Advanced proficiency with Microsoft Excel and VBA, or Google Sheets and Apps Script.
- Proficiency with MongoDB/SQL.
- Familiarity with Python for data manipulation and process automation preferred.
- Proficiency with various data types and formats, including, but not limited to, JSON.
- Intermediate proficiency with HTML/CSS.
- Experience with data-driven strategic planning.
- Strong background in data analysis, data reporting, and data management, coupled with adept process mapping and improvement.
- Strong research skills.
- Attention to detail.
What you can expect
Work closely with a global team helping bring market intelligence to the recycling world. As part of the Davis Index team, we look to foster relationships and help you grow with us. You can also expect:
- Work with leading minds in the recycling industry and be part of a growing, energetic global team.
- Exposure to developments and tools within your field, supporting career evolution and skill building, along with competitive compensation.
- Health insurance coverage, paid vacation days, and flexible work hours to help you maintain work-life balance.
- The opportunity to network and collaborate in a diverse community.
Apply directly using this link: https://nyteco.keka.com/careers/jobdetails/54122
About Nyteco
Nyteco Inc is a green tech venture for the recycled materials industry and manufacturing supply chain.
We serve the industry through our flagship company - Jules AI.
Nyteco aims to bring leading tech solutions to the recycling industry to help it grow its trading business, connect with one another, and much more!
About PrintStop
Our vision is 'to transform the procurement processes of large enterprises and make happy and efficient workplaces'. We are a 16-year-old bootstrapped company, profitable and growing year on year, with a team size of 120. We have carved our niche in the printing and corporate gifting industry with our products and technology-enabled platform solutions (Mandaala). We sell products that include:
1. Printed products like business cards, brochures, standees, etc., and
2. Branded merchandise products: we have logo branding capabilities using technologies like laser engraving, UV printing, and embroidery on 40+ product categories like apparel, bottles, pens, and backpacks.
Mandaala by PrintStop is a SaaS-based platform for enterprises that automates the order-to-delivery workflows for branded merchandise, onboarding kits, stationery, marketing collateral, corporate gifts, and a lot more. Using Mandaala’s digital portals for specific products, large organizations automate manual effort, eliminate errors in artwork, enable inbuilt approvals, and get complete visibility of spending, reducing the effort required by almost 80%. We serve two customer segments, namely SMEs (to know more, visit https://www.printstop.co.in/) and large enterprises (to know more, visit https://www.mandaala.com/) like Dr. Reddy's, Infosys, Tech M, HDFC Life, and Deloitte. We are an ISO 9001, ISO 27001, and Great Place to Work certified company. Our average employee tenure is 5 years. Our people love to grow with us :)
Role: Enterprise Sales Manager
The primary responsibility of this role is to convert new customers for Mandaala (the Enterprise division). This typically involves researching and reaching out to prospective companies, pitching our products and solutions, and converting them; our target market is India's top 1,000 companies. The role entails engaging with senior (VP and above) stakeholders in the HR, Admin, and Marketing departments of these companies.
Location: Mumbai, Pune, Chennai, Hyderabad
Key Responsibilities :
1. Research prospects in target companies
2. Reach out to senior stakeholders using various methods such as emails, social media (LinkedIn), and calls to generate meetings
3. Present and pitch PrintStop's products and solutions
4. Engage and build relations with key stakeholders to be able to convert and generate new business.
5. Upsell/cross-sell to penetrate the converted account to increase wallet share
6. Manage and keep the sales data updated in an accurate and timely manner in the CRM
7. Follow the sales processes and participate in ongoing sales and product training
Exposure to working in an HRTech, B2B SaaS, or gifting/printing company is a plus.
Skillsets:
Primary
1. Self-starter / Self-drive
2. Self Confidence (Senior stakeholder management)
3. Persuasion
4. Business Acumen: Understand customer - product - value proposition
5. Solution / Value-based selling
6. Executive-level communication (spoken, written, listening)
7. Strong attention to detail
8. Strong project/task management
9. Strong time management skills with the ability to multitask
Secondary
1. Negotiation skills
2. Using sales tools, AI & CRM
3. Presentation skills
Your actions will act as a catalyst in revolutionising the digital footprint of 200 million micro businesses worldwide.
Our ideal candidate for this role is an entrepreneurial and creative out-of-the-box thinking tech geek with a get-things-done attitude, who wants to thrive in a fast-paced international environment.
If you qualify and are excited about joining us, read about our functional asks below.
- Participate in the entire product development lifecycle, focusing on coding and debugging.
- Translate high-level business problems into scalable solutions. These include building self-learning modules for dynamic pricing, contextual recommendations, in-house analytics, and advanced real-time backend systems.
- Develop Web APIs and end-to-end web services for internal and external use.
- Develop unit test plans to deliver quality components.
- Build reusable code and libraries for future use.
- Optimize the software platform for maximum speed and scalability.
- Collaborate with front-end developers and other team members to establish objectives and design more functional, cohesive code to enhance the user experience.
- Develop ideas for new programs, products, or features by monitoring industry developments and trends.
- Perform UI tests to optimize performance.
- Provide training and support to internal teams.
- Follow emerging technologies.
Desired Candidate Profile
- Expertise in PHP and Laravel or CodeIgniter, MySQL or PostgreSQL, Nginx, and LEMP setup.
- Experience developing REST APIs and integrating third-party APIs (OAuth 2.0; see the sketch after this list), plus Git.
- Hands-on experience in application deployment on Linux servers.
- You can navigate a volatile environment where you are constantly challenged to push your own boundaries, turning ambiguous and complex signals of uncertainty into simple, winning outcomes.
- What separates you from other developers is your relentless drive to run new tests and experiments regularly. You love programming, but even more, you love implementing changes based on user data and creating highly scalable, technology-based solutions.
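The stack here is PHP/Laravel, but the OAuth 2.0 third-party integration pattern named above is language-agnostic; here is a minimal sketch in Python with the requests library, where every endpoint and credential is a placeholder:

```python
import requests

# Hypothetical endpoints and credentials, for illustration only.
TOKEN_URL = "https://auth.example.com/oauth/token"
API_URL = "https://api.example.com/v1/orders"

# Step 1: exchange client credentials for a bearer token
# (OAuth 2.0 client-credentials grant).
token_resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
})
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: call the protected third-party API with the token.
resp = requests.get(API_URL, headers={"Authorization": f"Bearer {access_token}"})
resp.raise_for_status()
print(resp.json())
```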
Perks and Benefits
A crazy ambitious role with potential for sky-rocketing growth, global exposure and immense monetary upside within a fast-growing start-up.
About Company
Websites.co.in is considered to be the world's easiest website builder platform that lets micro business owners create their website within 15 minutes and helps them grow their business online.
With the vision to champion the world’s 200 million micro business owners, Websites.co.in was started in Mumbai, India in 2017.
Since then, over 1 million people from 200+ countries have engaged with websites.co.in in 98 world languages to create their websites. Our 4.5-out-of-5-star rating across more than 11,000 reviews is testimony that we are on the right track.
CRO & Funnel Optimization
Experience: 1-4 years
Mode of work: Remote
Budget: 9 LPA
Skills:
B2B SaaS marketing, web analytics, A/B testing, data-driven tools (Google Analytics, HubSpot, SEMrush, Zoho Campaigns, Google Tag Manager).
Roles & Responsibilities:
- Craft and implement data-driven CRO strategies, elevating website performance and boosting conversion rates.
- Collaborate with SEO & PPC teams, breathing life into underperforming blogs/landing pages for improved SERP and MQLs.
- Conduct qualitative & quantitative research, uncovering insights to enhance website metrics and MQLs.
- Team up with Content, Sales, & PMM to align messaging with customer pain points and keyword intent.
- Bring design/layout changes to life with the designer and developer, optimizing user flows.
- Execute A/B and multivariate tests, recording experiments and tracking their impact on conversion metrics (see the sketch after this list).
- Collaborate with the sales team to revolutionize downstream conversions, devising creative plans for MQLs to SQLs and enhanced closed-won percentages.
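To make the A/B testing responsibility concrete, here is a minimal sketch of a two-proportion z-test on conversion counts, using statsmodels and entirely made-up numbers:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: variant A converted 210 of 4,800 visitors,
# variant B converted 255 of 4,750.
conversions = [210, 255]
visitors = [4800, 4750]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A p-value below the chosen significance level (commonly 0.05) suggests
# the difference in conversion rate is unlikely to be noise; in practice
# you would also fix the sample size and stopping rule before the test.
```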
Requirements
- 1-4 years of CRO experience in B2B SaaS marketing.
- Strong communication & interpersonal skills.
- A/B testing and optimization proficiency.
- Understanding of user experience and content flow.
- Readiness for remote work.
- Project management skills, mastering multitasking and meeting deadlines.
- Strong analytical & presentation skills.
- Critical thinking & problem-solving abilities.
- A relevant marketing degree or CXL certification in CRO is a plus.
- Big data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis, and data integration, implementing enterprise-level big data systems.
- A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, Zookeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL and DataFrames, Kafka, Spark Streaming, and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include the NameNode, DataNode, ResourceManager, NodeManager, and Job History Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installing, configuring, and managing big data and the underlying infrastructure of a Hadoop cluster.
- Hands-on experience in coding MapReduce/YARN programs using Java, Scala, and Python for analyzing big data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark using Scala on clusters for analytics, installed it on top of Hadoop, and performed advanced analytical applications by making use of Spark with Hive and SQL/Oracle.
- Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop, performing transformations using Hive and MapReduce, and then loading data into HDFS.
- Used the Spark DataFrames API over the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence, customer segmentation, and smooth maintenance in Spark Streaming.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Worked on creating data pipelines for different ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as a feed for Tableau dashboards (see the PySpark sketch after this list).
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 and S3, including automating cluster setup and extension in the AWS cloud.
- Extensively worked on Spark using Python on clusters for analytics, installed it on top of Hadoop, and performed advanced analytical applications by making use of Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing ad hoc queries using Cloudera Impala; also used Impala analytical functions.
- Experience in creating Data frames using PySpark and performing operation on the Data frames using Python.
- In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
- Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various kinds of knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Well experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying, and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development and software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, mainframes, Oracle, and Netezza into HDFS, and performing transformations on it using Hive, Pig, and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra, and MongoDB, and their integration with Hadoop clusters.
- Hands-on experience in Hadoop big data technology, working with MapReduce, Pig, and Hive as analysis tools and Sqoop and Flume as data import/export tools.
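Several of the bullets above repeat one core pattern: read data into Spark DataFrames, transform it, and write it back to Hive as a feed for dashboards. Here is a minimal PySpark sketch of that pattern, with hypothetical table and column names:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive-enabled session; assumes Spark is configured against the
# cluster's metastore.
spark = (
    SparkSession.builder
    .appName("consumer-response-feed")
    .enableHiveSupport()
    .getOrCreate()
)

# Read raw events from a Hive table (hypothetical name and schema).
events = spark.table("raw.consumer_responses")

# Aggregate responses per event type and day.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("responses"))
)

# Write to a table whose location a Tableau dashboard can read
# (the "analytics" database is assumed to exist).
daily.write.mode("overwrite").saveAsTable("analytics.consumer_response_daily")
```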
- Look after the day-to-day functions of management
- Answer HR requests and questions
- Update and maintain employee records
- Conduct recruitment and interview processes
- Analyze training needs
- Maintain the HR team's calendar (schedule meetings, interviews, HR events, etc.)
- Build employee records
- Track employee performance
- Analyze data from employee work performance
- Design and implement an overall recruiting strategy
- Consult with managers to discover specific job objectives
- Contact new employees and prepare onboarding sessions
MLOps Engineer
Required Candidate Profile:
- 3+ years’ experience in developing continuous integration and deployment (CI/CD) pipelines (e.g. Jenkins, GitHub Actions) and bringing ML models to CI/CD pipelines
- Candidate with strong Azure expertise
- Exposure to productionizing models
- Candidate should have complete knowledge of the Azure ecosystem, especially in the area of data engineering (DE)
- Candidate should have prior experience designing, building, testing, and maintaining machine learning infrastructure to empower data scientists to rapidly iterate on model development
- Develop continuous integration and deployment (CI/CD) pipelines on top of Azure that include AzureML, MLflow, and Azure DevOps (see the sketch after this list)
- Proficient knowledge of Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- E2E production experience with Azure ML, Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Worked on model drift (concept drift, data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, optimize ML and AI services, and maintain it all in an infrastructure-as-code, everything-in-version-control manner.
- Bachelor of Engineering or Master's degree in Computer Science
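To illustrate what bringing ML models into CI/CD pipelines can look like, here is a minimal sketch of MLflow experiment tracking that an Azure DevOps or GitHub Actions job could run on each commit; the experiment name, dataset, and model are illustrative, and AzureML workspace wiring is omitted:

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# A CI job might run this script on every commit, logging params, metrics,
# and the model artifact to a tracking server (e.g. an AzureML-backed
# MLflow endpoint; the tracking URI is environment-specific).
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("drift-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    mse = mean_squared_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("mse", mse)
    mlflow.sklearn.log_model(model, "model")
```

Comparing a metric like this against the previous run's value is one simple way a pipeline can gate deployment and surface drift early.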
- MEAN stack developer with experience in software development, prototyping, functional analysis, integration, and testing.
- Experience with JavaScript and RESTful technologies, including building RESTful APIs.
- In-depth understanding of JavaScript, especially Angular and Node.js, and the ability to peer review other engineers' code constructively.
- Highly skilled at problem-solving, unit-testing, and debugging.
- Expert understanding of best practice engineering principles for building MEAN applications.
- In-depth knowledge of database systems and an understanding of data structures, data normalization, and query performance considerations, particularly non-relational/NoSQL databases including MongoDB.
- Automated unit testing and CI experience
- Ability to effectively communicate technical concepts pictorially, orally or in writing.
- Docker experience desired
- Ability to learn business rules quickly by reading requirements, engaging in conversation, or reverse engineering.
- Expertise in source code control and versioning concepts.
Responsibilities
- 3-4 years of experience developing user-facing applications using Vue.js
- Building modular and reusable components and libraries
- Optimizing your application for performance
- Implementing automated testing integrated into development and maintenance workflows
- Staying up-to-date with all recent developments in the JavaScript and Vue.js space
- Keeping an eye on security updates and issues found with Vue.js and all project dependencies
- Proposing any upgrades and updates necessary for keeping up with modern security and development best practices
Requirements
- Highly proficient with the JavaScript language and its modern ES6+ syntax and features
- Highly proficient with the Vue.js framework and its core principles such as components, reactivity, and the virtual DOM
- Familiarity with the Vue.js ecosystem, including Vue CLI, Vuex, Vue Router, and Nuxt.js
- Good understanding of HTML5 and CSS3, including Sass or Less depending on your technology stack
- Understanding of server-side rendering and its benefits and use cases
- Knowledge of functional programming and object-oriented programming paradigms
- Ability to write efficient, secure, well-documented, and clean JavaScript code
- Familiarity with automated JavaScript testing, specifically testing frameworks such as Jest or Mocha
- Proficiency with modern development tools like Babel, Webpack, and Git
- Experience with both consuming and designing RESTful APIs
- Experience working with the Quasar Framework
- Proficiency in TypeScript and Node.js is a must
Qualifications: BE/BTech, ME/MTech, MCA
DemandMatrix Inc. is a data company that provides Go-To-Market, Operations, and Data Science teams with high-quality, company-level data and intelligence. DemandMatrix uses advanced data science methodologies to process millions of unstructured data points, producing reliable and accurate technology intelligence, organizational intent, and B2B spend data. Our product solves challenging problems for clients such as Microsoft, DocuSign, Leadspace, and many more.
Job Description
We use machine learning and narrow AI to find companies and the products they are using. This is done by researching millions of publicly available sources and over a billion documents per month. We are looking for a Tech Lead who loves tech challenges and is a problem solver. This role will give you an opportunity to brainstorm ideas and implement solutions from scratch.
What will you do?
You will be part of the team responsible for our product roadmap.
You will be involved in rapid prototyping and quick roll-outs of ideas, in fast-paced environments working alongside some of the most talented and smartest people in the industry.
Lead a team and deliver the best output in an agile environment.
Communicate effectively, both orally and in writing with a globally distributed team.
Who Are You?
Designed and built multiple web services and data pipelines handling large volumes of data.
Genuinely excited about technology and has worked on projects from scratch.
A highly motivated individual who thrives in an environment where problems are open-ended.
Must have
- 7+ years of hands-on experience in software development with a focus on microservices and data pipelines.
- Minimum 3 years of experience building services and pipelines using Python.
- Minimum 1 year of experience handling large volumes of data using MongoDB (see the sketch after this list).
- Minimum 1 year of hands-on experience with big data pipelines and data warehouses.
- Experience designing, building, and deploying scalable, highly available systems on AWS or GCP.
- Experience with Java.
- Experience with Docker/Kubernetes.
- Exposure to React.js for front-end development.
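As a minimal sketch of the Python-plus-MongoDB combination in the must-haves above, here is a pymongo snippet that upserts a batch of documents in one round trip; the connection string, database, collection, and fields are all placeholders:

```python
from pymongo import MongoClient, UpdateOne

# Placeholder connection string and collection names.
client = MongoClient("mongodb://localhost:27017")
companies = client["intel"]["companies"]

# Index the lookup key so high-volume upserts stay fast.
companies.create_index("domain", unique=True)

# Upsert a batch of enriched records in a single bulk operation.
batch = [
    {"domain": "example.com", "tech": ["kubernetes", "snowflake"]},
    {"domain": "example.org", "tech": ["docusign"]},
]
ops = [
    UpdateOne({"domain": doc["domain"]}, {"$set": doc}, upsert=True)
    for doc in batch
]
result = companies.bulk_write(ops)
print(result.upserted_count, "inserted,", result.modified_count, "updated")
```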
Additional Information
- Flexible working hours
- Fully work from home
- Birthday leave
- Remote work