Aspirant - Data Science & AI

at Busigence Technologies

Posted by Seema Verma
Bengaluru (Bangalore)
0 - 10 yrs
₹3L - ₹9L / yr
Full time
Skills
Data Science
Big Data
Machine Learning (ML)
Statistical Analysis
Deep Learning
Python
TensorFlow
Analytics
APPLY LINK: http://bit.ly/2yipqSE
Go through the entire job post thoroughly before pressing Apply. There is an eleven-character French word v*n*i*r*t*e mentioned somewhere in the whole text which is irrelevant to the context. You will be required to enter this word while applying, else the application won't be considered submitted.

------------------------------------------------------------------------------------------------------------------------

Aspirant - Data Science & AI
Team: Sciences
Full-Time, Trainee
Bengaluru, India
Relevant Exp: 0 - 10 Years
Background: Top-tier institute
Compensation: Above Standards

Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior, enabling strengthened decisions.

First-and-Foremost
Before you dive in exploring this opportunity and press Apply, we wish you to evaluate yourself. We are looking for the right candidate, not the best candidate. We love to work with someone who can mandatorily gel with our vision, beliefs, thoughts, methods, and values, which are aligned with what can be expected in a true startup with ambitious goals. Skills are always secondary to us. Primarily, you must be someone who is not essentially looking for a job or career, but rather starving for a challenge, probably without knowing since when. A book can be written on what an applicant must have before joining a startup. For brevity, in a nutshell, we need these three in you:

1. You must be [super sharp] (Just an analogue, but Irodov, Mensa, Feynman, Polya, ACM, NIPS, ICAAC, BattleCode, DOTA etc. should have been your "done" stuff. Can you relate solution 1 to problem 2? Or do you get confused even when you have solved a similar problem in the past? Are you able to grasp a problem statement in one go, or do you get stuck?)
2. You must be [extremely energetic] (Do you raise eyebrows when asked to stretch your limits, both in terms of complexity and extra hours to put in? What comes first in your mind: let's finish it today, or this can be done tomorrow too? It's Friday 10 PM at work. Tired?)
3. You must be [honourably honest] (Do you tell others what you think, or what they want to hear? The latter is good for a sales team and their customers, not for this role. Are you honest with your work, and intrinsically with yourself first?)

You know yourself the best. If not, ask your loved ones and then decide. We clearly need exceedingly motivated people with entrepreneurial traits, not an employee mindset. Not at all.

This is an immediate requirement. We shall have an accelerated interview process for fast closure; you would be required to be proactive and responsive.

Real ROLE
We are looking for students, graduates, and experienced folks with a real passion for algorithms, computing, and analysis. You would be required to work with our sciences team on complex cases from data science, machine learning, and business analytics.

Mandatory
R1. Must know functional programming in Python (https://docs.python.org/2/howto/functional.html) in and out, with a strong flair for data structures, linear algebra, and algorithm implementation. OOP alone will not be accepted.
R2. Must have dirtied your hands with methods, functions, and workarounds in NumPy, Pandas, Scikit-learn, SciPy, and Statsmodels; collectively you should have implemented at least 100 different techniques (we averaged this figure over past aspirants who have worked in this role).
R3. Must have implemented complex mathematical logic through a functional map-reduce style in Python (see the sketch below).
R4. Must have an essential-level understanding of the EDA cycle, machine learning algorithms, hyper-parameter optimization, ensemble learning, regularization, prediction, clustering, and association.
R5. Must have solved at least five problems through data science & machine learning. Mere Coursera learning and/or offline Kaggle attempts will not be accepted.
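To illustrate R1-R3 above, here is a minimal sketch (our own example, not one prescribed by the role) of a small mathematical computation, mean and population variance, written in a functional map-reduce style in pure Python:

```python
from functools import reduce

def mean_and_variance(xs):
    """Mean and population variance via map/reduce only, with no explicit loops."""
    n = len(xs)
    total = reduce(lambda acc, x: acc + x, xs, 0.0)
    mean = total / n
    # Map each value to its squared deviation from the mean, then reduce to a sum.
    squared_dev_sum = reduce(lambda acc, d: acc + d,
                             map(lambda x: (x - mean) ** 2, xs),
                             0.0)
    return mean, squared_dev_sum / n

print(mean_and_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # (5.0, 4.0)
```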
Preferred
R6. Good to have the calibre required to learn PySpark within four weeks of joining us.
R7. Good to have the calibre required to grasp the underlying business of a problem to be solved.
R8. Good to have a basic-level understanding of CNNs, RNNs, MLPs, and auto-encoders.
R9. Good to have solved at least three problems through deep learning. Mere Coursera learning and/or offline Kaggle attempts will not be accepted.
R10. Good to have worked on pre-processing techniques for images, audio, and text (OpenCV, Librosa, NLTK).
R11. Good to have used pre-trained models such as VGGNet, Inception, ResNet, WaveNet, and Word2Vec (see the sketch below).
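As a toy illustration of R11, a minimal sketch of inference with a pre-trained ResNet50 through Keras; it assumes TensorFlow 2.x is installed, and "example.jpg" is a hypothetical image path, not something the posting specifies:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# Load ImageNet-pretrained weights (downloaded on first use).
model = ResNet50(weights="imagenet")

# "example.jpg" is a hypothetical local image path.
img = image.load_img("example.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # [(class_id, class_name, probability), ...]
```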
Ideal YOU
Y1. Degree in engineering, or any other data-heavy field, at Bachelor's level or above from a top-tier institute
Y2. Relevant experience of 0 - 10 years working on real-world problems in a reputed company or a proven startup
Y3. You are a fanatical implementer who loves to spend time with content, code & workarounds, more than with your loved ones
Y4. You are a true believer that human intelligence can be augmented through computer science & mathematics, and your survival vinaigrette depends on getting the most from the data
Y5. You have an entrepreneurial mindset, with ownership, intellect, & creativity as your way of working. These are not fancy words; we mean it

Actual WE
W1. Real startup with meaningful products
W2. Revolutionary, not just disruptive
W3. Rule creators, not followers
W4. Small teams with real brains, not a herd of blockheads
W5. Completely trust us, and you shall be trusted back

Why Us
In addition to the regular stuff which every good startup offers (lots of learning, food, parties, an open culture, flexible working hours, and what not), we offer you: you shall be working on our revolutionary products, which are pioneers in their respective categories. This is a fact. We try real hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You shall be working with the creamiest talent on extremely challenging problems at the most happening workplace.

How to Apply
You should apply online by clicking "Apply Now". For queries regarding an open position, please write to [email protected]
For more information, visit http://www.busigence.com
Careers: http://careers.busigence.com
Research: http://research.busigence.com
Jobs: http://careers.busigence.com/jobs/data-science
If you feel you are the right fit for the position, mandatorily attach a PDF resume highlighting your:
A. Key Skills
B. Knowledge Inputs
C. Major Accomplishments
D. Problems Solved
E. Submissions: GitHub/ StackOverflow/ Kaggle/ Project Euler etc. (if applicable)
If you don't see an open position that interests you, join our Talent Pool and let us know how you can make a difference here. Referrals are more than welcome. Keep us in the loop.

About Busigence Technologies

Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior enabling strengthened decisions.

 

A scaling, established startup by IIT alumni, innovating & disrupting the marketing domain through artificial intelligence. We bring onboard those people who are dedicated to delivering wisdom to humanity by solving the world's most pressing problems differently, thereby significantly impacting thousands of souls, every day.

 

We are a deep-rooted organization with a six-year success story, having worked with folks from top-tier backgrounds (IIT, NSIT, DCE, BITS, IIITs, NITs, IIMs, ISI, etc.) and maintaining an awesome culture with a common vision to build great data products.

 

In the past we have served fifty-five customers and are presently developing our second product, Robonate. The first was emmoQ, an emotion intelligence platform. Our third offering, H2HData, is an innovation lab where we solve hard problems through data, science, & design.

 

We work extensively & intensely on big data, data science, machine learning, deep learning, reinforcement learning, data analytics, natural language processing, cognitive computing, and business intelligence.

 

We try real hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You shall be working with the creamiest talent on extremely challenging problems at the most happening workplace.

 

Our mission is to make the world decision-intelligent. We envision having worked on at least 1% of the world's data by 2020.

 

Why Explore a Career at Busigence

This section should have been entitled - Why Explore a Challenge at Busigence. What?

Busigence is not for everyone. This is the strongest differentiator. How?

Skills are secondary for us. We believe in intentions, not capabilities. Why?

------------------------------------------------------------------------------------------------------------------------

If the above three are not understood and/or do not interest you, we would encourage you to refrain from applying to us. You won't actually be able to work with us.

 

80-85% of candidates look for a job in an open position. 85-98% foresee a career in it. Hardly 2% are able to realise it as a challenge which may satisfy their soul.

 

We look for these 2%. PERIOD.

 

If you fortunately happen to fall in this group, then the world is too small for you, the reason being that there are very few organisations which can really meet your expectations.

 

We can! Busigence works on real hard problems. Solving customers' problems is our passion.

 

1. We do what the world is moving towards. Artificial Intelligence. Real AI.

2. We have a real startup culture (this is not bean bags or an open office or flexible hours; it is the spirit to create something which doesn't exist).

3. We hire the crème de la crème. Coincidentally, these happen to be candidates from the topmost tier (IIT & equivalent). People enjoy working with like-minded people.

4. You will be empowered every day, in a way that brings out the entrepreneurial trait in you.

5. This will be the Greatest Work of Your Life. Promise!

 

 

Busigence Interview Process

We don't hire, we handpick - Believe with us. Laugh with us. Work with us.

 

For formality,

 

Step 1: Apply iff you meet the Real ROLE & Ideal YOU criteria

Step 2: Round 0: First call followed by the Application Form

Step 3: Round 1: Technology/ Process/ Business Capability Evaluation

Step 4: Round 2: Day spent (1 or 2 days) working with us

 

Enough. We are Done.

Founded
2012
Type
Product
Size
20-100 employees
Stage
Bootstrapped

Similar jobs

Sr. Data Scientist (Global Media Agency)

at Global Media Agency - A client of Merito

Agency job
via Merito
Machine Learning (ML)
Data Science
media analytics
SQL
Python
MySQL
PostgreSQL
Business Intelligence (BI)
Tableau
Gurugram, Bengaluru (Bangalore), Mumbai
4 - 9 yrs
Best in industry

Our client combines Adtech and Martech platform strategy with data science & data engineering expertise, helping our clients make advertising work better for people.

 
Key Role:
  • Act as primary day-to-day contact on analytics to agency-client leads
  • Develop bespoke analytics proposals for presentation to agencies & clients, for delivery within the teams
  • Ensure delivery of projects and services across the analytics team meets our stakeholder requirements (time, quality, cost)
  • Work hands-on with platforms to perform data pre-processing, involving both data transformation and data cleaning (see the sketch after this list)
  • Ensure data quality and integrity
  • Interpret and analyse data problems
  • Build analytic systems and predictive models
  • Increase the performance and accuracy of machine learning algorithms through fine-tuning and further optimization
  • Visualize data and create reports
  • Experiment with new models and techniques
  • Align data projects with organizational goals
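The data pre-processing item above is open-ended; as a rough illustration only (our own example, with hypothetical file and column names, not the agency's actual data), a minimal pandas cleaning and transformation step might look like this:

```python
import pandas as pd

# "campaign_data.csv" and the column names below are hypothetical.
df = pd.read_csv("campaign_data.csv")

# Basic cleaning: drop exact duplicates and rows missing key fields.
df = df.drop_duplicates()
df = df.dropna(subset=["campaign_id", "spend"])

# Transformation: parse dates, coerce numeric types, derive a metric.
df["date"] = pd.to_datetime(df["date"], errors="coerce")
df["spend"] = pd.to_numeric(df["spend"], errors="coerce")
df["cpm"] = 1000 * df["spend"] / df["impressions"]

# Aggregate to a reporting grain.
summary = df.groupby("campaign_id", as_index=False).agg(
    total_spend=("spend", "sum"),
    avg_cpm=("cpm", "mean"),
)
print(summary.head())
```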


Requirements

  • Min 6 - 7 years’ experience working in Data Science
  • Prior experience as a Data Scientist within digital media is desirable
  • Solid understanding of machine learning
  • A degree in a quantitative field (e.g. economics, computer science, mathematics, statistics, engineering, physics, etc.)
  • Experience with SQL/ Big Query/GMP tech stack / Clean rooms such as ADH
  • A knack for statistical analysis and predictive modelling
  • Good knowledge of R, Python
  • Experience with SQL, MySQL, and PostgreSQL databases
  • Knowledge of data management and visualization techniques
  • Hands-on experience on BI/Visual Analytics Tools like PowerBI or Tableau or Data Studio
  • Evidence of technical comfort and good understanding of internet functionality desirable
  • Analytical pedigree - evidence of having approached problems from a mathematical perspective and working through to a solution in a logical way
  • Proactive and results-oriented
  • A positive, can-do attitude with a thirst to continually learn new things
  • An ability to work independently and collaboratively with a wide range of teams
  • Excellent communication skills, both written and oral
Job posted by
Merito Talent

Data Scientist

at Dori AI

Founded 2018  •  Products & Services  •  20-100 employees  •  Raised funding
Data Science
Machine Learning (ML)
Computer Vision
Python
recommendation algorithm
Bengaluru (Bangalore)
2 - 8 yrs
₹1L - ₹20L / yr
Dori AI enables enterprises with AI-powered video analytics to significantly increase human productivity and improve process compliance. We leverage a proprietary full stack end-to-end computer vision and deep learning platform to rapidly build and deploy AI solutions for enterprises. The platform was built with enterprise considerations including time-to-value, time-to-market, security and scalability across a range of use cases. Capture visual data across multiple sites, leverage AI + Computer Vision to gather key insights, and make decisions with actionable visual insights. Launch CV applications in a matter of weeks that are optimized for both cloud and edge deployments.

Job brief
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.

In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.

Your goal will be to help our company analyze trends to make better decisions.

Requirements
1. 2 to 4 years of relevant industry experience
2. Experience with linear algebra, statistics & probability (e.g. distributions), deep learning, and machine learning
3. Strong mathematical and statistics background is a must
4. Experience in machine learning frameworks such as Tensorflow, Caffe, PyTorch, or MxNet
5. Strong industry experience in using design patterns, algorithms and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands-on development experience in Python and packages such as NumPy, scikit-learn and Matplotlib
8. Experience in model building and hyperparameter tuning
Job posted by
Pravin JS

Backend Data Engineer

at India's best Short Video App

Agency job
via wrackle
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
Data engineer
Elastic Search
MongoDB
Python
Apache Storm
Druid Database
Apache HBase
Cassandra
DynamoDB
Memcached
Proxies
HDFS
Pig
Scribe
Apache ZooKeeper
Agile/Scrum
Roadmaps
DevOps
Software Testing (QA)
Data Warehouse (DWH)
flink
aws kinesis
presto
airflow
caches
data pipeline
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js, Express.js, or Python
3-4 years of experience in big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka Streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc.
3-4 years of experience in building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming, etc.
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, ElastiCache (Redis + Memcached)
Experience with designing and building high scale app backends and micro-services leveraging cloud native services on AWS like proxies, caches, CDNs, messaging systems, Serverless compute(e.g. lambda), monitoring and telemetry.
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, fault-tolerance.
Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend micro services.
5-7 years of strong design/development experience in building massively large scale, high throughput low latency distributed internet systems and products.
Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper and NoSQL systems etc.
Agile methodologies, Sprint management, Roadmap, Mentoring, Documenting, Software architecture.
Liaison with Product Management, DevOps, QA, Client and other teams
 
Your Experience Across The Years in the Roles You’ve Played
 
Have a total of 5 - 7 years of experience or more, with 2-3 years in a startup.
Have B.Tech or M.Tech or equivalent academic qualification from premier institute.
Experience in Product companies working on Internet-scale applications is preferred
Thoroughly aware of cloud computing infrastructure on AWS leveraging cloud native service and infrastructure services to design solutions.
Follow the Cloud Native Computing Foundation ecosystem, leveraging mature open-source projects, with an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Job posted by
Naveen Taalanki

Senior Data Engineer

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
PostgreSQL
DevOps
Amazon Web Services (AWS)
NodeJS (Node.js)
Ruby on Rails (ROR)
React.js
Python
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into data warehouses.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred (a minimal Airflow sketch follows this list)

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise infrastructure and on AWS or Google Cloud

  • Basic understanding of Kubernetes & docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.
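Since the list above mentions Airflow (and DBT), here is a minimal, hedged sketch of an Airflow DAG; it assumes Airflow 2.x, and the DAG id, task names, and dbt command are hypothetical illustrations, not Velocity's actual pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def extract_sources():
    # Placeholder: pull raw data from upstream sources into a warehouse landing zone.
    print("extracting raw data...")

# The DAG id, schedule, and commands below are hypothetical.
with DAG(
    dag_id="nightly_elt",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_sources", python_callable=extract_sources)
    transform = BashOperator(task_id="dbt_run", bash_command="dbt run --profiles-dir /opt/dbt")
    extract >> transform
```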

 

Job posted by
Newali Hazarika

Data Engineer

at CustomerGlu

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Data engineering
Data Engineer
MongoDB
DynamoDB
Apache
Apache Kafka
Hadoop
pandas
NumPy
Python
Machine Learning (ML)
Big Data
API
Data Structures
AWS Lambda
Glue semantics
Bengaluru (Bangalore)
2 - 3 yrs
₹8L - ₹12L / yr

CustomerGlu is a low code interactive user engagement platform. We're backed by Techstars and top-notch VCs from the US like Better Capital and SmartStart.

As we begin building repeatability into our core product offering at CustomerGlu, building high-quality data infrastructure/applications is emerging as a key requirement to further drive ROI from our interactive engagement programs and also to get ideas for new campaigns.

Hence we are adding more team members to our existing data team and looking for a Data Engineer.

Responsibilities

  • Design and build a high-performing data platform that is responsible for the extraction, transformation, and loading of data.
  • Develop low-latency real-time data analytics and segmentation applications.
  • Setup infrastructure for easily building data products on top of the data platform.
  • Be responsible for logging, monitoring, and error recovery of data pipelines.
  • Build workflows for automated scheduling of data transformation processes.
  • Able to lead a team

Requirements

  • 3+ years of experience and ability to manage a team
  • Experience working with databases like MongoDB and DynamoDB.
  • Knowledge of building batch data processing applications using Apache Spark (a minimal PySpark sketch follows this list).
  • Understanding of how backend services like HTTP APIs and Queues work.
  • Write good quality, maintainable code in one or more programming languages like Python, Scala, and Java.
  • Working knowledge of version control systems like Git.
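As a rough illustration of the Spark requirement above (our own minimal sketch; the input path, field names, and output location are hypothetical, not CustomerGlu's), a batch aggregation job in PySpark could look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("engagement-batch-aggregation").getOrCreate()

# Hypothetical input: newline-delimited JSON event logs.
events = spark.read.json("s3a://example-bucket/events/2024-01-01/")

# Batch transformation: count engagement events per user per campaign.
daily_engagement = (
    events
    .filter(F.col("event_type") == "engagement")
    .groupBy("user_id", "campaign_id")
    .agg(F.count("*").alias("engagement_count"))
)

# Hypothetical output location, partitioned for downstream consumers.
daily_engagement.write.mode("overwrite").partitionBy("campaign_id").parquet(
    "s3a://example-bucket/aggregates/daily_engagement/"
)

spark.stop()
```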

Bonus Skills

  • Experience in real-time data processing using Apache Kafka or AWS Kinesis.
  • Experience with AWS tools like Lambda and Glue.
Job posted by
Barkha Budhori

Data Scientist

at Symansys Technologies India Pvt Ltd

Founded 2014  •  Products & Services  •  employees  •  Profitable
Data Science
Machine Learning (ML)
Python
Tableau
R Programming
SQL server
Pune, Mumbai
2 - 8 yrs
₹5L - ₹15L / yr

Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision trees, random forests, SAS, clustering, classification

Senior Analytics Consultant- Responsibilities

  • Understand the business problem and requirements by building domain knowledge, and translate them into a data science problem
  • Conceptualize and design a cutting-edge data science solution to solve the data science problem, applying design thinking concepts
  • Identify the right algorithms, tech stack, and sample outputs required to efficiently address the end need
  • Prototype and experiment with the solution to successfully demonstrate its value
  • Independently, or with support from the team, execute the conceptualized solution as per plan, following project management guidelines
  • Present the results to internal and client stakeholders in an easy-to-understand manner with great storytelling, storyboarding, insights and visualization
  • Help build overall data science capability for eClerx through support in pilots, pre-sales pitches, product development, and practice development initiatives
Job posted by
Tanu Chauhan

Product Solutions Engineer

at Digital & Fintech based companies

Agency job
via Agile Hire
Python
Bash
MySQL
Elastic Search
Amazon Web Services (AWS)
Bengaluru (Bangalore)
2 - 4 yrs
₹12L - ₹16L / yr

What are we looking for:

 

  1. Strong experience in MySQL and writing advanced queries
  2. Strong experience in Bash and Python
  3. Familiarity with ElasticSearch, Redis, Java, NodeJS, ClickHouse, S3
  4. Exposure to cloud services such as AWS, Azure, or GCP
  5. 2+ years of experience in production support
  6. Strong experience in log management and performance monitoring with tools like ELK, Prometheus + Grafana, and logging services on various cloud platforms
  7. Strong understanding of Linux OSes like Ubuntu, CentOS / Redhat Linux
  8. Interest in learning new languages / framework as needed
  9. Good written and oral communications skills
  10. A growth mindset and passionate about building things from the ground up, and most importantly, you should be fun to work with

 

As a product solutions engineer, you will:

 

  1. Analyze recorded runtime issues, diagnose and do occasional code fixes of low to medium complexity
  2. Work with developers to find and correct more complex issues
  3. Address urgent issues quickly, work within and measure against customer SLAs
  4. Use shell and Python scripts to actively automate manual / repetitive activities (see the sketch after this list)
  5. Build anomaly detectors wherever applicable
  6. Pass articulated feedback from customers to the development and product team
  7. Maintain an ongoing record of problem analysis and resolution in an on-call monitoring system
  8. Offer technical support needed in development
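As a rough illustration of the scripting/automation item above (our own example; the log path and error format are hypothetical), a small Python script to summarise recurring errors from a log file might look like this:

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical log location and error format.
LOG_PATH = Path("/var/log/app/service.log")
ERROR_RE = re.compile(r"ERROR\s+\[(?P<component>[\w.-]+)\]")

def summarise_errors(path: Path) -> Counter:
    """Count ERROR lines per component so repetitive triage can be automated."""
    counts = Counter()
    with path.open() as fh:
        for line in fh:
            match = ERROR_RE.search(line)
            if match:
                counts[match.group("component")] += 1
    return counts

if __name__ == "__main__":
    for component, count in summarise_errors(LOG_PATH).most_common(10):
        print(f"{component}: {count}")
```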

 

Job posted by
Siddhi .

Software Engineer (Big Data)

at an OTT platform

Agency job
via Vmultiply solutions
Big Data
Apache Kafka
Kibana
Elastic Search
Logstash
Hyderabad
6 - 8 yrs
₹8L - ₹15L / yr
A passionate data engineer with the ability to manage data coming from different sources.
Should design and operate data pipelines.
Build and manage an analytics platform using Elasticsearch, Redshift, and MongoDB.
Strong programming fundamentals in data structures and algorithms.
Job posted by
HR Lakshmi

Data Scientist

at Episource LLC

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Python
Machine Learning (ML)
Data Science
Amazon Web Services (AWS)
Apache Spark
Natural Language Processing (NLP)
Mumbai
4 - 8 yrs
₹12L - ₹20L / yr

We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP-driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, and information extraction from clinical notes.
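As a toy illustration of the named entity recognition task mentioned above (the library and model choice are our assumptions; the posting does not prescribe spaCy, and real clinical NER would typically use a domain-trained model), extracting entities from a note might start like this:

```python
import spacy

# "en_core_web_sm" is a general-purpose model used only for illustration;
# clinical NER would normally rely on a domain-specific model instead.
nlp = spacy.load("en_core_web_sm")

note = "Patient reports chest pain for 3 days; started aspirin 81 mg daily."
doc = nlp(note)

# Print each detected entity with its label.
for ent in doc.ents:
    print(ent.text, ent.label_)
```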


This is a role for highly technical machine learning & data engineers who combine outstanding oral and written communication skills, and the ability to code up prototypes and productionalize using a large range of tools, algorithms, and languages. Most importantly they need to have the ability to autonomously plan and organize their work assignments based on high-level team goals.


You will be responsible for setting an agenda to develop and ship machine learning models that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company, and help build a foundation of tools and practices used by quantitative staff across the company.



What you will achieve:

  • Define the research vision for data science, and oversee planning, staffing, and prioritization to make sure the team is advancing that roadmap

  • Invest in your team’s skills, tools, and processes to improve their velocity, including working with engineering counterparts to shape the roadmap for machine learning needs

  • Hire, retain, and develop talented and diverse staff through ownership of our data science hiring processes, brand, and functional leadership of data scientists

  • Evangelise machine learning and AI internally and externally, including attending conferences and being a thought leader in the space

  • Partner with the executive team and other business leaders to deliver cross-functional research work and models






Required Skills:


  • A strong background in classical machine learning and machine learning deployments is a must, preferably with 4-8 years of experience

  • Knowledge of deep learning & NLP

  • Hands-on experience in TensorFlow/PyTorch, Scikit-Learn, Python, Apache Spark & Big Data platforms to manipulate large-scale structured and unstructured datasets.

  • Experience with GPU computing is a plus.

  • Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization. This could be through technical leadership with ownership over a research agenda, or developing a team as a personnel manager in a new area at a larger company.

  • Expert-level experience with a wide range of quantitative methods that can be applied to business problems.

  • Evidence you’ve successfully been able to scope, deliver and sell your own research in a way that shifts the agenda of a large organization.

  • Excellent written and verbal communication skills on quantitative topics for a variety of audiences: product managers, designers, engineers, and business leaders.

  • Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling


Qualifications

  • Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization

  • Expert-level experience with machine learning that can be applied to business problems

  • Evidence you’ve successfully been able to scope, deliver and sell your own work in a way that shifts the agenda of a large organization

  • Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling

  • Degree in a field that has very applicable use of data science / statistics techniques (e.g. statistics, applied math, computer science, OR a science field with direct statistics application)

  • 5+ years of industry experience in data science and machine learning, preferably at a software product company

  • 3+ years of experience managing data science teams, incl. managing/grooming managers beneath you

  • 3+ years of experience partnering with executive staff on data topics

Job posted by
Manas Ranjan Kar

Data Engineer

at Codalyze Technologies

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
Hadoop
Big Data
Scala
Spark
Amazon Web Services (AWS)
Java
Python
Apache Hive
Mumbai
3 - 7 yrs
₹7L - ₹20L / yr
Job Overview:

Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex and mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.

Responsibilities and Duties :

- As a Data Engineer you will be responsible for the development of data pipelines for numerous applications handling all kinds of data: structured, semi-structured & unstructured. Big data knowledge, especially in Spark & Hive, is highly preferred.

- Work in a team and provide proactive technical oversight, advising development teams and fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions

Education level :

- Bachelor's degree in Computer Science or equivalent

Experience :

- Minimum 5+ years of relevant experience working on production-grade projects, with hands-on, end-to-end software development experience

- Expertise in application, data and infrastructure architecture disciplines

- Expert at designing data integrations using ETL and other data integration patterns

- Advanced knowledge of architecture, design and business processes

Proficiency in :

- Modern programming languages like Java, Python, Scala

- Big Data technologies Hadoop, Spark, HIVE, Kafka

- Writing decently optimized SQL queries

- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)

- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions

- Knowledge of system development lifecycle methodologies, such as waterfall and AGILE.

- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.

- Experience generating physical data models and the associated DDL from logical data models.

- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.

- Experience enforcing data modeling standards and procedures.

- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.

- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals

Skills :

Must Know :

- Core big-data concepts

- Spark - PySpark/Scala

- Data integration tool like Pentaho, Nifi, SSIS, etc (at least 1)

- Handling of various file formats

- Cloud platform - AWS/Azure/GCP

- Orchestration tool - Airflow
Job posted by
Aishwarya Hire