Machine Learning Instructor

Posted by Praveen Baheti
0 - 15 yrs
₹4L - ₹8L / yr
Kolkata
Skills
Python
Deep Learning
Machine Learning (ML)
Data Analytics
Data Science
R Programming
Amazon Web Services (AWS)
Data Visualization
You'll be giving industry standard training to engineering students and mentoring them to develop their custom mini projects.
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About Alien Brains

Founded: 2017
Type:
Size: 20-100
Stage: Bootstrapped
About

We at Alien Brains started with the belief that there is no bound to what the human brain can learn; it is limited only by the ifs and buts created by the other human brains around it. Such is that influence on most people that their own brain at times becomes alien to them, making them forget who they really are and who they could be a few years down the line. The entire team at Alien Brains is dedicated to helping the innovations of the mind become the reality of tomorrow.

 

While building our proprietary software, we also organize workshops, seminars, and training programs for both school and college students. Aimed at giving participants a kick-start in the technical arena, we aspire to unleash the potential of today's students and make them the innovators of tomorrow.

 

We like to think of ourselves as a cradle of innovation, where we research and develop products that matter while nurturing young talent alongside us in our research and product development, giving them firsthand experience.

Connect with the team
Praveen Baheti
Sourabh Kumar
Shovon Roy
Sourav Singh
Company social profiles
LinkedIn

Similar jobs

Bengaluru (Bangalore)
6 - 12 yrs
₹25L - ₹35L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+4 more


• 6+ years of data science experience.

• Demonstrated experience in leading programs.

• Prior experience in customer data platforms/finance domain is a plus.

• Demonstrated ability in developing and deploying data-driven products.

• Experience working with large datasets and developing scalable algorithms.

• Hands-on experience working with tech, product, and operations teams.


Technical Skills:

• Deep understanding and hands-on experience of machine learning and deep learning algorithms. Good understanding of NLP and LLM concepts, and fair experience in developing NLU and NLG solutions.

• Experience with Keras/TensorFlow/PyTorch deep learning frameworks.

• Proficient in scripting languages (Python/Shell), SQL.

• Good knowledge of Statistics.

• Experience with big data, cloud, and MLOps.
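
As a small illustration of the hands-on ML fundamentals the list above asks for, here is a minimal logistic-regression trainer written from scratch in plain Python; the toy data and hyperparameters are purely illustrative, and a real project would use a framework such as scikit-learn or PyTorch:

```python
from math import exp

def sigmoid(z):
    return 1 / (1 + exp(-z))

def train_logreg(xs, ys, lr=0.5, epochs=500):
    """Fit 1-D logistic regression y ~ sigmoid(w*x + b) by stochastic gradient descent."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of log-loss w.r.t. the logit
            w -= lr * err * x
            b -= lr * err
    return w, b

# Toy, linearly separable data: the label is 1 when x > 2.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(xs, ys)
predict = lambda x: 1 if sigmoid(w * x + b) >= 0.5 else 0
```

The same gradient-descent idea, stacked in layers and automated by autodiff, is what the deep learning frameworks mentioned below implement at scale.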

Soft Skills:

• Strong analytical and problem-solving skills.

• Excellent presentation and communication skills.

• Ability to work independently and deal with ambiguity.

Continuous Learning:

• Stay up to date with emerging technologies.


Qualification:


A degree in Computer Science, Statistics, Applied Mathematics, Machine Learning, or a related field (e.g., B.Tech).



Top startup of India - News App
Noida
2 - 5 yrs
₹20L - ₹35L / yr
Linux/Unix
Python
Hadoop
Apache Spark
MongoDB
+4 more
Responsibilities
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional and non-functional business requirements.
● Build and optimize ‘big data’ pipelines, architectures, and data sets.
● Maintain, organize, and automate data processes for various use cases.
● Identify trends, do follow-up analysis, and prepare visualizations.
● Create daily, weekly, and monthly reports of product KPIs.
● Create informative, actionable, and repeatable reporting that highlights relevant business trends and opportunities for improvement.
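
The pipeline-and-reporting responsibilities above can be sketched in miniature. Record shapes and KPI names here are hypothetical, and a production version would run on Spark/Hadoop rather than plain Python:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw event records, as they might arrive from an app's event stream.
RAW_EVENTS = [
    {"ts": "2023-05-01T09:15:00", "user": "u1", "event": "article_read"},
    {"ts": "2023-05-01T11:30:00", "user": "u2", "event": "article_read"},
    {"ts": "2023-05-02T08:05:00", "user": "u1", "event": "share"},
    {"ts": "not-a-timestamp",     "user": "u3", "event": "article_read"},  # malformed row
]

def clean(events):
    """Drop rows whose timestamp fails to parse (a typical validation step)."""
    out = []
    for e in events:
        try:
            day = datetime.fromisoformat(e["ts"]).date().isoformat()
        except ValueError:
            continue  # skip malformed records rather than failing the whole pipeline
        out.append({**e, "day": day})
    return out

def daily_kpis(events):
    """Aggregate cleaned events into a per-day KPI: a count per event type."""
    kpis = defaultdict(lambda: defaultdict(int))
    for e in events:
        kpis[e["day"]][e["event"]] += 1
    return {day: dict(counts) for day, counts in kpis.items()}

report = daily_kpis(clean(RAW_EVENTS))
```

The clean-then-aggregate shape is the same whether the "engine" is a Python loop, a Spark job, or a BigQuery query; only the scale changes.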

Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science
● Strong analytical, quantitative, and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred), and Linux is a must.
● Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
● Experience with Google Cloud data analytics products such as BigQuery, Dataflow, Dataproc, etc. (or similar cloud-based platforms).
● Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
Information Solution Provider Company
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
+3 more

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.

 

Our ideal candidate

The role would be a client-facing one, hence good communication skills are a must.

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner. 

 

The candidate would be responsible for:

  • Comprehending business problems properly - what to predict, how to build the DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant for the business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy
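
On the ensembling point above: a minimal sketch of the simplest ensemble, majority voting, in plain Python. Stacking would replace the vote with a meta-model trained on out-of-fold predictions; the model outputs here are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several base models by majority vote.

    `predictions` is a list of per-model prediction lists, one label per sample.
    """
    per_sample = zip(*predictions)  # regroup: one tuple of model votes per sample
    return [Counter(votes).most_common(1)[0][0] for votes in per_sample]

# Three hypothetical base models that each make one (different) mistake:
model_a = [1, 0, 1, 1]
model_b = [1, 1, 1, 0]
model_c = [0, 0, 1, 1]
truth   = [1, 0, 1, 1]

ensemble = majority_vote([model_a, model_b, model_c])
```

Because each base model errs on a different sample, the vote recovers the correct label everywhere - the intuition behind why ensembles of de-correlated models raise accuracy.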

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code.
  • Experience in PySpark, machine learning, and deep learning
  • Big data experience, e.g., familiarity with Spark or Hadoop, is highly preferred
  • Familiarity with SQL or other databases.
Marktine
at Marktine
1 recruiter
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹20L / yr
Big Data
Spark
PySpark
Data engineering
Data Warehouse (DWH)
+5 more

Azure – Data Engineer

  • At least 2 years' hands-on experience working with an Agile data engineering team on big data pipelines using Azure in a commercial environment.
  • Experience dealing with senior stakeholders/leadership
  • Understanding of Azure data security and encryption best practices. [ADFS/ACLs]

Databricks – experience writing in and using Databricks, using Python to transform and manipulate data.

Data Factory – experience using Data Factory in an enterprise solution to build data pipelines, including calling REST APIs.

Synapse/data warehouse – experience using Synapse/a data warehouse to present data securely and to build and manage data models.

Microsoft SQL Server – we’d expect the candidate to have come from a SQL/data background and progressed into Azure.

Power BI – experience with this is preferred.

Additionally

  • Experience using GIT as a source control system
  • Understanding of DevOps concepts and application
  • Understanding of Azure Cloud costs/management and running platforms efficiently
Read more
Angel One
at Angel One
4 recruiters
Posted by Andleeb Mujeeb
Remote only
2 - 6 yrs
₹12L - ₹18L / yr
Amazon Web Services (AWS)
PySpark
Python
Scala
Go Programming (Golang)
+19 more

Designation: Specialist - Cloud Service Developer (ABL_SS_600)

Position description:

  • The person would be primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
  • Apply advanced troubleshooting techniques to provide solutions to issues pertaining to service availability, performance, and resiliency
  • Monitor and optimize performance using AWS dashboards and logs
  • Partner with engineering leaders and peers in delivering technology solutions that meet the business requirements
  • Work with the cloud team in an agile approach and develop cost-optimized solutions

 

Primary Responsibilities:

  • Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.

 

Reporting Team

  • Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
  • Reporting Department: Application Development (2487)

Required Skills:

  • AWS certification would be preferred
  • Good understanding of monitoring (CloudWatch, alarms, logs, custom metrics, SNS trust configuration)
  • Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services.
  • Knowledge of storage (S3, lifecycle management, event configuration) preferred
  • Strong data structures and programming skills in PySpark, Python, Golang, or Scala
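
As a sketch of the Lambda development mentioned above: the `(event, context)` signature is the standard Lambda entry-point contract, and the `statusCode`/`body` response shape is what an ALB or API Gateway proxy integration expects. The greeting payload is purely illustrative:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: echo a greeting for an ALB/API request."""
    # Query parameters may be absent entirely, hence the `or {}` guard.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Keeping the handler a plain function with no AWS SDK calls makes it trivially unit-testable locally, before any deployment.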
Srijan Technologies
at Srijan Technologies
6 recruiters
Posted by PriyaSaini
Remote only
3 - 8 yrs
₹5L - ₹12L / yr
Data Analytics
Data modeling
Python
PySpark
ETL
+3 more

Role Description:

  • You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
  • You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
  • You will run data engineering pipelines, link raw client data with the data model, conduct data assessments, perform data quality checks, and transform data using ETL tools.
  • You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and / or PySpark.
  • In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
  • You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
  • Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on-time and perform quality checks wherever required.
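
A miniature sketch of the validate-and-transform scripts described above, in plain Python with hypothetical field names; a real pipeline would express the same checks in PySpark or an ETL tool:

```python
# A hypothetical raw extract: every field arrives as a string, some rows are bad.
RAW_ROWS = [
    {"client_id": "c-001", "amount": "125.50", "currency": "INR"},
    {"client_id": "",      "amount": "90.00",  "currency": "INR"},  # missing key
    {"client_id": "c-002", "amount": "oops",   "currency": "INR"},  # bad number
]

def validate_and_transform(rows):
    """Split rows into (loadable, rejected) after basic data quality checks."""
    loadable, rejected = [], []
    for row in rows:
        errors = []
        if not row.get("client_id"):
            errors.append("missing client_id")
        try:
            amount = float(row["amount"])
        except ValueError:
            errors.append("amount is not numeric")
            amount = None
        if errors:
            rejected.append({"row": row, "errors": errors})
        else:
            loadable.append({**row, "amount": amount})
    return loadable, rejected

good, bad = validate_and_transform(RAW_ROWS)
```

Routing rejected rows to a side channel with their error reasons, rather than dropping them silently, is what makes data quality checks auditable.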

Job Requirement:

  • Bachelor’s degree in Engineering or Computer Science; Master’s degree is a plus
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark
  • Ability to conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools
  • Experience deploying ETL/data pipelines and workflows on cloud platforms such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UID, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Strong problem-solving, requirement-gathering, and leadership skills.
  • Track record of completing projects successfully on time, within budget and as per scope

MNC
Bengaluru (Bangalore)
4 - 7 yrs
₹25L - ₹28L / yr
Data Science
Data Scientist
R Programming
Python
SQL
  • Banking Domain
  • Assist the team in building Machine learning/AI/Analytics models on open-source stack using Python and the Azure cloud stack.
  • Be part of the internal data science team at fragma data that provides data science consultation to large organizations such as banks, e-commerce companies, and social media companies on their scalable AI/ML needs on the cloud, and help build POCs and develop production-ready solutions.
  • Candidates will be provided with opportunities for training and professional certifications on the job in these areas - Azure Machine learning services, Microsoft Customer Insights, Spark, Chatbots, DataBricks, NoSQL databases etc.
  • Assist the team in conducting AI demos, talks, and workshops occasionally to large audiences of senior stakeholders in the industry.
  • Work on large enterprise scale projects end-to-end, involving domain specific projects across banking, finance, ecommerce, social media etc.
  • Keen interest in learning new technologies and the latest developments, and applying them to assigned projects.
Desired Skills
  • Professional hands-on coding experience in Python: over 1 year for Data Scientist, and over 3 years for Sr. Data Scientist.
  • This is primarily a programming/development-oriented role - hence strong programming skills in writing object-oriented and modular code in Python, and experience pushing projects to production, are important.
  • Strong foundational knowledge and professional experience in:
  • Machine Learning (compulsory)
  • Deep Learning (compulsory)
  • Strong knowledge of at least one of: Natural Language Processing, Computer Vision, Speech Processing, or Business Analytics
  • Understanding of database technologies and SQL (compulsory)
  • Knowledge of the following frameworks:
  • Scikit-learn (compulsory)
  • Keras/TensorFlow/PyTorch (at least one of these is compulsory)
  • API development in Python for ML models (good to have)
  • Excellent communication skills are necessary to succeed in this role, as it has high external visibility and multiple opportunities to present data science results to large external audiences, including VPs, Directors, and CXOs; communication skills will therefore be a key consideration in the selection process.
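
For context on the scikit-learn requirement above: its estimators follow a fit/predict convention, sketched here with a trivial majority-class baseline in plain Python. This is a stand-in illustrating the interface shape, not a real sklearn estimator:

```python
from collections import Counter

class MajorityClassClassifier:
    """Baseline classifier following the scikit-learn fit/predict convention."""

    def fit(self, X, y):
        # Learn nothing but the most frequent label in the training data.
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self  # returning self enables sklearn-style chaining: clf.fit(...).predict(...)

    def predict(self, X):
        return [self.majority_ for _ in X]

clf = MajorityClassClassifier().fit([[0], [1], [2]], ["spam", "ham", "ham"])
preds = clf.predict([[5], [6]])
```

Baselines like this are also practically useful: any real model should beat the majority-class accuracy, or something is wrong.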
Simplilearn Solutions
at Simplilearn Solutions
1 video
36 recruiters
Posted by Aniket Manhar Nanjee
Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹10L / yr
Data Science
R Programming
Python
Scala
Tableau
+1 more
Simplilearn.com is the world’s largest professional certifications company and an Onalytica Top 20 influential brand. With a library of 400+ courses, we've helped 500,000+ professionals advance their careers, delivering $5 billion in pay raises. Simplilearn has over 6500 employees worldwide and our customers include Fortune 1000 companies, top universities, leading agencies and hundreds of thousands of working professionals. We are growing over 200% year on year and having fun doing it.

Description

We are looking for candidates with strong technical skills and a proven track record in building predictive solutions for enterprises. This is a very challenging role and provides an opportunity to work on developing insights-based ed-tech software products used by a large set of customers across the globe. It provides an exciting opportunity to work across various advanced analytics and data science problem statements using cutting-edge modern technologies, collaborating with product, marketing, and sales teams.

Responsibilities
  • Work on enterprise-level advanced reporting requirements and data analysis.
  • Solve various data science problems: customer engagement, dynamic pricing, lead scoring, NPS improvement, optimization, chatbots, etc.
  • Work on data engineering problems utilizing our tech stack: S3 Datalake, Spark, Redshift, Presto, Druid, Airflow, etc.
  • Collect relevant data from source systems, and use crawling and parsing infrastructure to put together data sets.
  • Craft, conduct, and analyse A/B experiments to evaluate machine learning models/algorithms.
  • Communicate findings and take algorithms/models to production with ownership.

Desired Skills
  • BE/BTech/MSc/MS in Computer Science or a related technical field.
  • 2-5 years of experience in an advanced analytics discipline with solid data engineering and visualization skills.
  • Strong SQL skills and BI skills using Tableau, and the ability to perform various complex analytics on data.
  • Ability to propose hypotheses and design experiments in the context of specific problems using statistics and ML algorithms.
  • Good overlap with modern data processing frameworks such as AWS Lambda and Spark using Scala or Python.
  • Dedication and diligence in understanding the application domain, collecting/cleaning data, and conducting various A/B experiments.
  • A Bachelor's degree in Statistics, or prior experience with ed-tech, is a plus.
YourHRfolks
at YourHRfolks
6 recruiters
Posted by Pranit Visiyait
Jaipur
4 - 6 yrs
₹13L - ₹16L / yr
Python
SQL
MySQL
Data Visualization
R
+3 more

About Us
Punchh is the leader in customer loyalty, offer management, and AI solutions for offline and omni-channel merchants including restaurants, convenience stores, and retailers. Punchh brings the power of online to physical brands by delivering omni-channel experiences and personalization across the entire customer journey--from acquisition through loyalty and growth--to drive same store sales and customer lifetime value. Punchh uses best-in-class integrations to POS and other in-store systems such as WiFi, to deliver real-time SKU-level transaction visibility and offer provisioning for physical stores.


Punchh is growing exponentially, serves 200+ brands that encompass 91K+ stores globally.  Punchh’s customers include the top convenience stores such as Casey’s General Stores, 25+ of the top 100 restaurant brands such as Papa John's, Little Caesars, Denny’s, Focus Brands (5 of 7 brands), and Yum! Brands (KFC, Pizza Hut, and Taco Bell), and retailers.  For a multi-billion $ brand with 6K+ stores, Punchh drove a 3% lift in same-store sales within the first year.  Punchh is powering loyalty programs for 135+ million consumers. 

Punchh has raised $70 million from premier Silicon Valley investors including Sapphire Ventures and Adam Street Partners, has a seasoned leadership team with extensive experience in digital, marketing, CRM, and AI technologies as well as deep restaurant and retail industry expertise.


About the Role: 

Punchh Tech India Pvt. is looking for a Senior Data Analyst – Business Insights to join our team. If you're excited to be part of a winning team, Punchh is a great place to grow your career.

This position is responsible for discovering the important trends in the complex data generated on the Punchh platform that have high business impact (influencing product features and the roadmap), creating hypotheses around these trends, validating them with statistical significance, and making recommendations.
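
Validating a trend "with statistical significance" often comes down to a test such as the two-proportion z-test. A plain-Python sketch with hypothetical campaign numbers follows; in practice scipy/statsmodels would be used:

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is the lift in group B statistically real?

    Returns (z, p_value). A pure-Python stand-in for the scipy equivalents.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: a 3% relative lift in conversion on a large sample.
z, p = two_proportion_z(conv_a=10_000, n_a=100_000, conv_b=10_300, n_b=100_000)
```

With these (illustrative) sample sizes the 3% lift is significant at the 5% level; the same lift on a sample 100x smaller would not be, which is exactly why the validation step matters.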


Reporting to: Director, Analytics

Job Location: Jaipur

Experience Required: 4-6 years


What You’ll Do

  • Take ownership of custom data analysis projects/requests and work closely with end users (both internal and external clients) to deliver the results
  • Identify successful implementation/utilization of product features and contribute to the best-practices playbook for client facing teams (Customer Success)
  • Strive towards building mini business intelligence products that add value to the client base
  • Represent the company’s expertise in advanced analytics in a variety of settings, such as client interactions, conferences, blogs, and interviews.

What You’ll Need

  • Masters in business/behavioral economics/statistics with a strong interest in marketing technology
  • Proven track record of at least 5 years uncovering business insights, especially related to Behavioral Economics and adding value to businesses
  • Proficient in using the proper statistical and econometric approaches to establish the presence and strength of trends in data. Strong statistical knowledge is mandatory.
  • Extensive prior exposure to causal inference studies, based on both longitudinal and cross-sectional data.
  • Excellent experience using Python (or R) to analyze data from extremely large or complex data sets
  • Exceptional data querying skills (Snowflake/Redshift, Spark, Presto/Athena, to name a few)
  • Ability to effectively articulate complex ideas in simple and effective presentations to diverse groups of stakeholders.
  • Experience working with a visualization tool (preferably, but not restricted to Tableau)
  • Domain expertise: extensive exposure to retail business, restaurant business or worked on loyalty programs and promotion/campaign effectiveness
  • Should be self-organized and be able to proactively identify problems and propose solutions
  • Gels well within and across teams, work with stakeholders from various functions such as Product, Customer Success, Implementations among others
  • As the business-side stakeholders are based in the US, should be flexible to schedule meetings convenient for West Coast hours
  • Effective in working autonomously to get things done and taking the initiatives to anticipate needs of executive leadership
  • Able and willing to relocate to Jaipur post pandemic.

Benefits:

  • Medical Coverage, to keep you and your family healthy.
  • Compensation that stacks up with other tech companies in your area.
  • Paid vacation days and holidays to rest and relax.
  • Healthy lunch provided daily to fuel you through your work.
  • Opportunities for career growth and training support, including fun team building events.
  • Flexibility and a comfortable work environment for you to feel your best.
 
Bengaluru (Bangalore)
8 - 15 yrs
₹15L - ₹30L / yr
Technical Architecture
Big Data
IT Solutioning
Python
Rest API

Role and Responsibilities

  • Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
  • Build robust RESTful APIs that serve data and insights to DataWeave and other products
  • Design user interaction workflows on our products and integrate them with data APIs
  • Help stabilize and scale our existing systems. Help design the next generation systems.
  • Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
  • Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
  • Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
  • Constantly think scale, think automation. Measure everything. Optimize proactively.
  • Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
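
The serving-layer APIs described above boil down to routing requests to data and returning JSON. A toy dispatcher follows; the routes, data, and in-memory store are hypothetical stand-ins for a real framework and database:

```python
import json

# Hypothetical in-memory data the API would serve; a real serving layer would
# back this with a database or cache for low-latency reads.
REPORTS = {"daily": {"rows": 1250}, "weekly": {"rows": 8900}}

def handle_request(method, path):
    """Tiny route dispatcher in the style of a RESTful serving layer.

    Returns (status_code, json_body).
    """
    if method != "GET":
        return 405, json.dumps({"error": "method not allowed"})
    parts = path.strip("/").split("/")
    if parts[:1] == ["reports"] and len(parts) == 2 and parts[1] in REPORTS:
        return 200, json.dumps(REPORTS[parts[1]])
    return 404, json.dumps({"error": "not found"})
```

Keeping routing as a pure function of (method, path) makes the dispatch logic testable without spinning up a server, which helps when stabilizing and scaling such systems.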

 

Skills and Requirements

  • 8-15 years of experience building and scaling APIs and web applications.
  • Experience building and managing large scale data/analytics systems.
  • Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
  • Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
  • Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
  • Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
  • Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
  • Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
  • Use the command line like a pro. Be proficient in Git and other essential software development tools.
  • Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
  • Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
  • Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
  • Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
  • It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.
Why apply to jobs via Cutshort
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.
21,01,133
Matches delivered
37,12,187
Network size
15,000
Companies hiring
Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.
Get to hear about interesting companies hiring right now