
Stackless Python Jobs

Explore top Stackless Python Job opportunities from Top Companies & Startups. All jobs are added by verified employees who can be contacted directly below.

TechUnity Software Systems India Pvt Ltd
Coimbatore
2 - 5 yrs
₹3L - ₹4L / yr
Data Visualization
SQL
Stackless Python
R Programming
matplotlib
+4 more

We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company.
We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical, with a knack for math and statistics.
Critical thinking and problem-solving skills are essential for interpreting data.
We also want to see a passion for machine-learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Responsibilities:

  • Identify valuable data sources and automate collection processes
  • Undertake preprocessing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling (a short sketch follows this list)
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams
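As a hedged illustration of the ensemble-modeling responsibility above, a minimal sketch assuming scikit-learn (not named in this posting) and synthetic stand-in data:

# Hypothetical sketch: combining two models with a soft-voting ensemble in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real project would pull features from company data sources.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=42)),
    ],
    voting="soft",  # average predicted class probabilities across the models
)
ensemble.fit(X_train, y_train)
print(f"Held-out accuracy: {ensemble.score(X_test, y_test):.3f}")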

Requirements:

  • Proven experience as a Data Scientist or Data Analyst
  • Experience in data mining
  • Understanding of machine-learning and operations research
  • Knowledge of SQL, Python, R, ggplot2, matplotlib, seaborn, Shiny, Dash; familiarity with Scala, Java or C++ is an asset (a short visualization sketch follows this list)
  • Experience using business intelligence tools (e.g. Tableau) and data frameworks
  • Analytical mind and business acumen
  • Strong math skills in statistics, algebra
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • BSc/BE in Computer Science, Engineering or a relevant field; a graduate degree in Data Science or another quantitative field is preferred
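A small, hedged example of the visualization stack named in the requirements (matplotlib and seaborn, with pandas assumed as the data carrier; the data and file name are placeholders):

# Hypothetical sketch: a simple aggregate bar chart with seaborn on top of matplotlib.
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Illustrative data only; a real report would load from SQL or an export.
df = pd.DataFrame({
    "region": ["North", "South", "East", "West"] * 3,
    "revenue": [120, 95, 130, 80, 115, 99, 140, 85, 118, 97, 135, 82],
})
sns.barplot(data=df, x="region", y="revenue")  # bars show the mean revenue per region
plt.title("Mean revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")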
Fxbytes technologies

Posted by Shweta Bharti
Vijay Nagar, Indore
3 - 6 yrs
₹2L - ₹8L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more

Seeking Data Analytics Trainer with Power BI and Tableau Expertise

Experience Required: Minimum 3 Years

Location: Indore

Part-Time / Full-Time Availability


We are actively seeking a qualified candidate to join our team as a Data Analytics Trainer, with a strong focus on Power BI and Tableau expertise. The ideal candidate should possess the following qualifications:


  A track record of 3 to 6 years in delivering technical training and mentoring.

  Profound understanding of Data Analytics concepts.

  Strong proficiency in Excel and Advanced Excel.

  Demonstrated hands-on experience and effective training skills in Python, Data Visualization, R Programming, and an in-depth understanding of both Power BI and Tableau.


Follow me on LinkedIn to get more job updates 👇


https://www.linkedin.com/in/shweta-bharti-a105ab197/

A Leading Edtech Company
Noida
3 - 6 yrs
₹12L - ₹15L / yr
MongoDB
MySQL
SQL
  • Sound knowledge of MongoDB as the primary skill
  • Hands-on experience with MySQL as a secondary skill
  • Experience with replication, sharding and scaling
  • Design, install and maintain highly available systems (including monitoring, security, backup and performance tuning)
  • Implement secure database and server installations (privileged-access methodology / role-based access); a short PyMongo sketch follows this list
  • Help the application team with query writing, performance tuning and other day-to-day (D2D) issues
  • Deploy automation techniques for day-to-day operations
  • Must possess good analytical and problem-solving skills
  • Must be willing to work flexible hours as needed
  • Scripting experience is a plus
  • Ability to work independently and as a member of a team
  • Good verbal and written communication skills
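A hedged PyMongo sketch of the role-based access point above (hosts, replica-set name, user and password are placeholders):

# Hypothetical sketch: connect to a replica set and create a least-privilege user.
from pymongo import MongoClient

client = MongoClient("mongodb://db1:27017,db2:27017/?replicaSet=rs0")

# Create an application user limited to read/write on a single database.
client.admin.command(
    "createUser",
    "app_user",
    pwd="change-me",
    roles=[{"role": "readWrite", "db": "appdb"}],
)

# Confirm which replica set the client is talking to.
print(client.admin.command("replSetGetStatus")["set"])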
RandomTrees

Posted by Amareswarreddt yaddula
Remote only
5 - 10 yrs
₹1L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Job Title: Senior Data Engineer

Experience: 8 to 11 years

Location: Remote

Notice period: Immediate or a maximum of 1 month

Role: Permanent Role


Skill set: Google Cloud Platform, BigQuery, Java, Python, Airflow, Dataflow, Apache Beam.


Experience required:

5 years of experience in software design and development with 4 years of experience in the data engineering field is preferred.

2 years of hands-on experience in GCP cloud data implementation suites such as BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, Cloud Storage, etc.

Strong experience and understanding of very large-scale data architecture, solutions, and operationalization of data warehouses, data lakes, and analytics platforms.

At least 1 year of software development experience using Java or Python (mandatory).

Extensive hands-on experience working with data using SQL and Python.


Must have: GCP, BigQuery, Airflow, Dataflow, Python, Java.


GCP knowledge is a must.

Java as the programming language (preferred).

BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, Cloud Storage.

Python.

Good communication skills.
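A minimal, hedged Apache Beam sketch in the spirit of the Dataflow/Beam requirement (bucket paths are placeholders; locally this runs on the DirectRunner, while on GCP it would run with the DataflowRunner and typically load into BigQuery):

# Hypothetical sketch: a tiny batch pipeline with the Apache Beam Python SDK.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read lines" >> beam.io.ReadFromText("gs://example-bucket/input/*.csv")
        | "Parse rows" >> beam.Map(lambda line: line.split(","))
        | "Keep valid" >> beam.Filter(lambda row: len(row) == 3)
        | "Reformat" >> beam.Map(lambda row: ",".join(row))
        | "Write out" >> beam.io.WriteToText("gs://example-bucket/output/cleaned")
    )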


Graasai
Posted by Vineet A
Pune
3 - 7 yrs
₹10L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

Graas uses predictive AI to turbo-charge growth for eCommerce businesses. We are “Growth-as-a-Service”. Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics - all of which impact a brand’s bottom line, driving profitable growth.


Roles & Responsibilities:

Work on implementation of real-time and batch data pipelines for disparate data sources.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
  • Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
  • Identify improvement areas in the current data system and implement optimizations.
  • Work on specific areas of data governance including metadata management and data quality management.
  • Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
  • Develop Proof-of-Concepts to validate new technology solutions or advancements.
  • Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
  • Work on building intelligent systems using various AI/ML algorithms. 

 

Desired Experience/Skill:

 

  • Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
  • Experience with private and public cloud architectures with pros/cons.
  • Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must; knowledge of one of the frameworks such as Django or Flask is a plus.
  • Experience in implementing data processing pipelines using AWS services such as Kinesis, Lambda, Redshift/Snowflake and RDS (a short Lambda sketch follows this list).
  • Knowledge of Kafka and Redis is preferred.
  • Experience in the design and implementation of real-time and batch pipelines. Knowledge of Airflow is preferred.
  • Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
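A hedged sketch of the Kinesis-to-Lambda leg of such a pipeline (the event layout follows the standard Kinesis trigger payload; the downstream load into Redshift/Snowflake is omitted):

# Hypothetical sketch: an AWS Lambda handler decoding records from a Kinesis trigger.
import base64
import json

def handler(event, context):
    """Decode each Kinesis record and return how many were processed."""
    records = event.get("Records", [])
    for record in records:
        raw = base64.b64decode(record["kinesis"]["data"])
        payload = json.loads(raw)
        # A real pipeline would validate the payload and stage it for Redshift/Snowflake.
        print(payload)
    return {"processed": len(records)}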
Cubera Tech India Pvt Ltd
Posted by Surabhi Koushik
Bengaluru (Bangalore)
2 - 3 yrs
₹24L - ₹35L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
SQL
+6 more

Data Scientist

 

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

 

What you’ll do?

 

  • Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
  • Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
  • Research new and innovative machine learning approaches.
  • Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
  • Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.  
  • Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
  • Work with your team on ambiguous problem areas in existing or new ML initiatives

What are we looking for?

  • Ability to write a SQL query to pull the data you need.
  • Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn and matplotlib.
  • Experience in TensorFlow and/or R modelling and/or PyTorch (a small Keras sketch follows this list).
  • Ability to understand a business problem and translate and structure it into a data science problem.
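A minimal, hedged TensorFlow/Keras sketch in the spirit of the stack above (synthetic data; a production model would be validated and productionized with the engineering team):

# Hypothetical sketch: a tiny binary classifier built with the Keras API.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")  # synthetic labels

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]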

 

Job Category: Data Science

Job Type: Full Time

Job Location: Bangalore

 

Deep-Rooted.co (formerly Clover)

Posted by Likhithaa D
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
Java
Python
SQL
AWS Lambda
HTTP
+5 more

Deep-Rooted.Co is on a mission to get Fresh, Clean, Community (Local farmer) produce from harvest to reach your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.


Founded out of Bangalore by Arvind, Avinash, Guru and Santosh, with the support of our investors Accel, Omnivore and Mayfield, we have raised $7.5 million in Seed, Series A and debt funding to date. Our brand Deep-Rooted.Co, launched in August 2020, was the first of its kind in India’s Fruits & Vegetables (F&V) space; it is present in Bangalore and Hyderabad and on a journey of expansion to newer cities, managed seamlessly through a tech platform designed and built to transform the Agri-Tech sector.


Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.  

How is this possible? It’s because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and the CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it as it touches everyday life and is fun. This will be a virtual consultation.


We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.

Purpose of the role:

* As a startup, we have data distributed across various sources like Excel, Google Sheets, databases, etc. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together and put it into a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs.
* Pull data in and manage its growth, freshness and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing and Business Heads.
* Understand the business problem, solve it using technology and take it to production - no hand-offs; the full path to production is yours.

Technical expertise:
* Good knowledge of and experience with programming languages - Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc. (a short Sheets-ingestion sketch follows the skills list below).
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on the cloud.

Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, Rest API, Extract Transform Load.
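As a hedged illustration of the Google Sheets ingestion mentioned under technical expertise, a small sketch using the gspread client (gspread, the credentials file and the sheet key are assumptions, not named in the posting):

# Hypothetical sketch: pull a Google Sheet into pandas for downstream modelling.
import gspread
import pandas as pd

gc = gspread.service_account(filename="service-account.json")  # placeholder credentials
worksheet = gc.open_by_key("SHEET_KEY_PLACEHOLDER").sheet1

df = pd.DataFrame(worksheet.get_all_records())
# Light clean-up before the data lands in the decision-making data model.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
print(df.head())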
Amazon India

Posted by Tanya Thakur
Chennai
5 - 12 yrs
₹10L - ₹22L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

BASIC QUALIFICATIONS

 

  • 2+ years' experience in program or project management
  • Project-handling experience using Six Sigma/Lean processes
  • Experience interpreting data to make business recommendations

  • Bachelor’s degree or higher in Operations, Business, Project Management or Engineering
  • 5-10 years' experience in project or customer-satisfaction roles, with a proven success record
  • Understanding of basic and systematic approaches to managing projects/programs
  • Structured problem-solving approach to identifying and fixing problems
  • Open-minded, creative and proactive thinking
  • A pioneering mindset to invent and make a difference
  • Understanding of customer experience, listening to the customer's voice and working backwards to improve business processes and operations
  • Certification in Six Sigma

 

PREFERRED QUALIFICATIONS

 

  • Automation skills, with experience in advanced SQL, Python and Tableau
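A hedged sketch of the SQL/Python automation skill above (SQLite and the table are stand-ins for whichever warehouse feeds the Tableau dashboards):

# Hypothetical sketch: run a SQL summary and export a CSV extract for a dashboard.
import sqlite3
import pandas as pd

conn = sqlite3.connect("operations.db")  # placeholder database
summary = pd.read_sql_query(
    """
    SELECT site, COUNT(*) AS orders, AVG(cycle_time_hours) AS avg_cycle_time
    FROM shipments
    GROUP BY site
    ORDER BY orders DESC
    """,
    conn,
)
summary.to_csv("site_summary.csv", index=False)  # picked up by the Tableau refresh
conn.close()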
Magic9 Media and Consumer Knowledge Pvt. Ltd.
Mumbai
3 - 5 yrs
₹7L - ₹12L / yr
ETL
SQL
Python
Statistical Analysis
Machine Learning (ML)
+4 more

Job Description

This requirement is to service our client, a leading big data technology company that measures what viewers consume across platforms to enable marketers to make better advertising decisions. We are seeking a Senior Data Operations Analyst to mine large-scale datasets for our client. Your work will have a direct impact on driving business strategies for prominent industry leaders. Self-motivation and strong communication skills are both must-haves, and the ability to work in a fast-paced environment is desired.


Problems being solved by our client: 

Measure consumer usage of devices linked to the internet and home networks including computers, mobile phones, tablets, streaming sticks, smart TVs, thermostats and other appliances. There are more screens and other connected devices in homes than ever before, yet there have been major gaps in understanding how consumers interact with this technology. Our client uses a measurement technology to unravel dynamics of consumers’ interactions with multiple devices.


Duties and responsibilities:

  • The successful candidate will contribute to the development of novel audience measurement and demographic inference solutions. 
  • Develop, implement, and support statistical or machine learning methodologies and processes. 
  • Build and test new features and concepts, and integrate them into the production process
  • Participate in ongoing research and evaluation of new technologies
  • Exercise your experience in the development lifecycle through analysis, design, development, testing and deployment of this system
  • Collaborate with teams in Software Engineering, Operations, and Product Management to deliver timely and quality data. You will be the knowledge expert, delivering quality data to our clients

Qualifications:

  • 3-5 years of relevant work experience in the areas outlined below
  • Experience in extracting data using SQL from large databases
  • Experience in writing complex ETL processes and frameworks for analytics and data management; must have experience working with ETL tools (a minimal ETL sketch follows this list)
  • Master’s degree or PhD in Statistics, Data Science, Economics, Operations Research, Computer Science, or a similar degree with a focus on statistical methods. A Bachelor’s degree in the same fields with significant, demonstrated professional research experience will also be considered. 
  • Programming experience in scientific computing language (R, Python, Julia) and the ability to interact with relational data (SQL, Apache Pig, SparkSQL). General purpose programming (Python, Scala, Java) and familiarity with Hadoop is a plus.  
  • Excellent verbal and written communication skills. 
  • Experience with TV or digital audience measurement or market research data is a plus. 
  • Familiarity with systems analysis or systems thinking is a plus. 
  • Must be comfortable with analyzing complex, high-volume and high-dimension data from varying sources
  • Excellent verbal, written and computer communication skills
  • Ability to engage with Senior Leaders across all functional departments
  • Ability to take on new responsibilities and adapt to changes
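A minimal, hedged sketch of the SQL-extraction and ETL pattern referenced in the qualifications (SQLite and the table names are stand-ins for the client's actual sources):

# Hypothetical sketch: a small extract-transform-load step with pandas and SQLite.
import sqlite3
import pandas as pd

source = sqlite3.connect("measurement_raw.db")      # placeholder source
target = sqlite3.connect("measurement_curated.db")  # placeholder target

# Extract: pull raw device-level events.
events = pd.read_sql_query("SELECT device_id, ts, duration_sec FROM raw_events", source)

# Transform: drop invalid durations and aggregate to daily usage per device.
events = events[events["duration_sec"] > 0]
events["day"] = pd.to_datetime(events["ts"]).dt.date
daily = events.groupby(["device_id", "day"], as_index=False)["duration_sec"].sum()

# Load: replace the curated table.
daily.to_sql("daily_device_usage", target, if_exists="replace", index=False)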

 

Fragma Data Systems

Posted by Harpreet kour
Bengaluru (Bangalore)
1 - 6 yrs
₹10L - ₹15L / yr
Data engineering
Big Data
PySpark
SQL
Python
  • Good experience in PySpark, including DataFrame core functions and Spark SQL (a short sketch follows this list)
  • Good experience with SQL databases; able to write queries of fair complexity
  • Excellent experience in Big Data programming for data transformations and aggregations
  • Good at ELT architecture: business-rules processing and data extraction from the Data Lake into data streams for business consumption
  • Good customer communication
  • Good analytical skills
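A short, hedged sketch of the PySpark DataFrame and Spark SQL points above (data and column names are illustrative):

# Hypothetical sketch: the same aggregation via the DataFrame API and via Spark SQL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-sketch").getOrCreate()
df = spark.createDataFrame(
    [("A", 10.0), ("A", 15.0), ("B", 7.0)],
    ["category", "amount"],
)

# DataFrame API aggregation.
df.groupBy("category").agg(F.sum("amount").alias("total")).show()

# Equivalent Spark SQL over a temporary view.
df.createOrReplaceTempView("sales")
spark.sql("SELECT category, SUM(amount) AS total FROM sales GROUP BY category").show()

spark.stop()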
MNC

Agency job
via Fragma Data Systems by Priyanka U
Chennai
1 - 5 yrs
₹6L - ₹12L / yr
Data Science
Natural Language Processing (NLP)
Data Scientist
R Programming
Python
Skills
  • Python coding skills
  • Experience with scikit-learn, pandas and TensorFlow/Keras
  • Machine learning: designing ML models and explaining them for regression, classification, dimensionality reduction, anomaly detection, etc.
  • Implementing machine learning models and pushing them to production
  • Creating Docker images for ML models and building REST APIs in Python
1) Data scientist with NLP experience
  • Additional skills (compulsory):
    • Knowledge of and professional experience with text and NLP projects such as text classification, text summarization, topic modeling, etc. (a small text-classification sketch follows this list)
2) Data scientist with computer vision for documents experience
  • Additional skills (compulsory):
    • Knowledge of and professional experience with vision and deep learning for documents - CNNs and deep neural networks using TensorFlow/Keras for object detection, OCR implementation, document extraction, etc.
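A small, hedged sketch of the text-classification work mentioned for the NLP profile (scikit-learn is an assumption; the corpus and labels are illustrative):

# Hypothetical sketch: a minimal text-classification pipeline with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Tiny illustrative corpus; a real project would load labelled documents from storage.
texts = [
    "invoice overdue payment",
    "team meeting moved to friday",
    "payment received thank you",
    "friday social after the meeting",
]
labels = ["finance", "internal", "finance", "internal"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)
print(clf.predict(["overdue invoice reminder"]))  # expected: ['finance']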