Data Scientist

at Searce Inc

Posted by Mishita Juneja
Pune, Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
Full time
Skills
Data Science
R Programming
Python
Deep Learning
Hadoop
Neural networks
Natural Language Processing (NLP)
Machine Learning (ML)
Computer Vision
TensorFlow
PyTorch
Artificial Intelligence (AI)

Data Scientist - Applied AI


Who are we?

Searce is a niche cloud consulting business with a futuristic tech DNA. We use new-age technology to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure technologies such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in the naked truth. We do what we say and say what we do.
  • Client Partnership
      • Client-vendor relationship? No. We partner with clients instead.
      • And our sales team comprises 100% of our clients.

How do we work?

It’s all about being Happier first, and the rest follows. Searce’s work culture is defined by HAPPIER:

  • Humble: Happy people don’t carry ego around. We listen to understand, not to respond.
  • Adaptable: We are comfortable with uncertainty, and we accept change well. That’s what life is about.
  • Positive: We are super positive about work and life in general. We forgive and forget; we don’t hold grudges. We don’t have the time or space for them.
  • Passionate: We are as passionate about the great street-food vendor across the street as we are about Tesla’s newest model. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or die. We love to challenge the status quo.
  • Experimental: We encourage curiosity and making mistakes.
  • Responsible: Driven. Self-motivated. Self-governing teams. We own it.

So, what are we hunting for?

As a Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models using pre-existing components to making current systems more intelligent. You should be able to train models on existing data and use them in the most creative ways to deliver the smartest experience to customers. You will develop multiple AI applications that push the threshold of intelligence in machines.


Working on multiple projects at a time, you will need to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also need to thrive in a fast, high-energy environment and balance multiple projects in real time. The thrill of the next big challenge should drive you, and when faced with an obstacle you should be able to find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and a background that demonstrates this ability.

Your bucket of Undertakings:

  1. Collaborate with team members to develop new models for classification problems
  2. Work on software profiling, performance tuning and analysis, and other general software engineering tasks
  3. Use independent judgment to build new models from existing data
  4. Collaborate across teams, provide technical guidance, and come up with new ideas; rapidly prototype and convert prototypes into scalable products
  5. Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
  6. Lead AI R&D initiatives, including prototypes and minimum viable products
  7. Work closely with multiple teams on projects such as visual quality inspection, MLOps, conversational banking, demand forecasting, and anomaly detection
  8. Build reusable and scalable solutions for use across the customer base
  9. Prototype and demonstrate AI-related products and solutions for customers
  10. Assist business development teams in expanding and enhancing a pipeline to support short- and long-range growth plans
  11. Identify new business opportunities and prioritize AI pursuits
  12. Participate in long-range strategic planning activities designed to meet the company’s objectives and to increase its enterprise value and revenue goals
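
Undertaking 5 above can be made concrete with a short sketch of how a language-processing module might be scored against a labelled evaluation set. This is purely illustrative: the gold labels and module outputs below are invented, not taken from any Searce system.

```python
# Illustrative sketch: measuring the accuracy and recall of a
# language-processing module against a labelled evaluation set.
# Gold labels and module predictions are invented for illustration.

def accuracy(gold, pred):
    """Fraction of examples the module labelled correctly."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def recall(gold, pred, positive=1):
    """Of all truly positive examples, the fraction the module recovered."""
    relevant = [(g, p) for g, p in zip(gold, pred) if g == positive]
    return sum(g == p for g, p in relevant) / len(relevant)

# Gold labels vs. module output for 8 evaluation sentences (1 = entity found).
gold = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 0, 0, 1, 0]

print(accuracy(gold, pred))  # 6 of 8 correct -> 0.75
print(recall(gold, pred))    # 3 of 4 positives recovered -> 0.75
```

In practice the same comparison would be run over held-out data with library metrics rather than hand-rolled ones, but the underlying bookkeeping is exactly this.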

Education & Experience:

  1. BE/B.Tech/Masters in a quantitative field such as CS, EE, information sciences, statistics, mathematics, economics, or operations research, with a focus on applied and foundational machine learning, AI, NLP, and/or data-driven statistical analysis & modelling
  2. 3+ years of experience, primarily in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions across multiple domains; experience with financial engineering and financial processes is a plus
  3. Strong, proven programming skills with machine learning, deep learning, and big data frameworks such as TensorFlow, Caffe, Spark, and Hadoop, including experience writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open-source tools as-is: writing custom code on top of, or in addition to, existing open-source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Ability to research and implement novel machine learning and statistical approaches
  7. Experience in data management, data analytics middleware, platforms and infrastructure, and cloud and fog computing is a plus
  8. Excellent communication skills (oral and written) to explain complex algorithms and solutions to stakeholders across multiple disciplines, and the ability to work in a diverse team

Accomplishment Set:

  1. Extensive experience with Hadoop and machine learning algorithms
  2. Exposure to deep learning, neural networks, or related fields, and a strong interest and desire to pursue them
  3. Experience in natural language processing, computer vision, machine learning, or machine intelligence (artificial intelligence)
  4. Passion for solving NLP problems
  5. Experience with specialized tools and projects for natural language processing
  6. Knowledge of machine learning frameworks such as TensorFlow and PyTorch
  7. Experience with software version control systems such as Git (e.g., GitHub)
  8. Fast learner, able to work independently as well as in a team, with good written and verbal communication skills
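
As a flavour of the NLP items above, one of the simplest building blocks behind text-classification modules is a bag-of-words featurizer. The sketch below is illustrative only; the corpus and sentences are invented.

```python
# Illustrative sketch: a tiny bag-of-words featurizer, one of the simplest
# building blocks in NLP pipelines. Corpus and sentences are invented.
from collections import Counter

def build_vocab(corpus):
    """Map each distinct whitespace-separated token to a feature index."""
    vocab = {}
    for sentence in corpus:
        for token in sentence.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def featurize(sentence, vocab):
    """Turn a sentence into a count vector over the vocabulary."""
    counts = Counter(sentence.lower().split())
    return [counts.get(token, 0) for token in vocab]

corpus = ["the model trains fast", "the model overfits"]
vocab = build_vocab(corpus)
# vocab: {'the': 0, 'model': 1, 'trains': 2, 'fast': 3, 'overfits': 4}
print(featurize("the fast model", vocab))  # -> [1, 1, 0, 1, 0]
```

Frameworks like TensorFlow and PyTorch consume exactly these kinds of vectors; the hand-rolled version just makes the representation visible.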

About Searce Inc

Searce is a cloud, automation & analytics-led process improvement company helping futurify businesses. Searce is a premier Google Cloud partner for all products and services, and the largest cloud systems integrator for enterprises, with the largest number of enterprise Google Cloud clients in India.

Searce specializes in helping businesses move to the cloud, build on the next-generation cloud, and adopt SaaS, helping reimagine the ‘why’ and redefine ‘what’s next’ for workflows, automation, machine learning, and related futuristic use cases. Google recognized Searce as one of its top partners for 2015 and 2016.

Searce's organizational culture encourages making mistakes and questioning the status quo, which allows us to specialize in simplifying complex business processes and to take a technology-agnostic approach to create, improve, and deliver.

Founded: 2004
Type: Products & Services
Size: 100-1000 employees
Stage: Profitable
