
Senior Specialist - BigData Engineering
Posted by Jesvin Varghese


Locations

Mumbai, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)

Experience

5 - 10 years

Salary

15 - 35 lacs/annum

Skills

Hadoop
Scala
Python
Big Data
Spark
Apache
AWS
Linux

Job description

Role brief: 6+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient and scalable code to implement those solutions.

About Fractal and the team: Fractal Analytics leads Fortune 500 companies in leveraging Big Data, analytics and technology to drive smarter, faster and more accurate decisions in every aspect of their business. Our Big Data capability team is hiring technologists who can produce beautiful, functional code to solve complex analytics problems. If you are an exceptional developer who loves to push the boundaries to solve complex business problems using innovative solutions, we would like to talk with you.

Job responsibilities:
• Provide technical leadership in the Big Data space (the Hadoop stack: MapReduce, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc., and NoSQL stores such as Cassandra and HBase) across Fractal, and contribute to open-source Big Data technologies.
• Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near-real-time and real-time technologies).
• Evaluate and recommend a Big Data technology stack that aligns with the company's technology.
• Bring a passion for continuous learning, experimenting with, applying and contributing to cutting-edge open-source technologies and software paradigms.
• Drive significant technology initiatives end to end and across multiple layers of architecture.
• Provide strong technical leadership in adopting and contributing to open-source Big Data technologies across the company.
• Provide strong technical expertise (performance, application design, stack upgrades) to lead Platform Engineering.
• Define and drive best practices for the Big Data stack, and evangelize them across teams and business units.
• Drive operational excellence through root-cause analysis and continuous improvement for Big Data technologies and processes, and contribute back to the open-source community.
• Provide technical leadership and be a role model to data engineers pursuing a technical career path in engineering.
• Provide and inspire innovations that fuel the growth of Fractal as a whole.

Experience (must have); ideally this would include work on the following technologies:
• Expert-level proficiency in at least one of Java, C++ or Python (preferred); Scala knowledge is a strong advantage.
• Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce and HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
• Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
• Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3 and SWF services and the AWS CLI).
• Experience working in a Linux computing environment and with command-line tools, including shell/Python scripting for automating common tasks.
• Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works.
• A technologist who loves to code and design.
In addition, the ideal candidate has great problem-solving skills and the ability and confidence to hack their way out of tight corners.

Relevant experience:
• Java, Python or C++ expertise
• Linux environment and shell scripting
• Distributed computing frameworks (Hadoop or Spark)
• Cloud computing platforms (AWS)

Good to have:
• A statistical or machine-learning DSL such as R
• Distributed and low-latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra
• Familiarity with API design

Qualification: B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent.
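
To make the Spark and AWS portion of the stack above concrete, here is a minimal, purely illustrative PySpark batch job. The bucket name, paths and column names are hypothetical placeholders, not Fractal's actual data.

# Minimal PySpark sketch: read raw events from S3, aggregate daily counts, write Parquet.
# Hypothetical bucket, paths and column names -- for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

events = spark.read.json("s3a://example-bucket/raw/events/2019-03-01/")

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").parquet("s3a://example-bucket/curated/daily_event_counts/")

spark.stop()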

About Fractal Analytics

Fractal Analytics helps global Fortune 500 companies power every human decision in the enterprise by bringing analytics and AI to the decision.

Founded

2000

Type

Products & Services

Size

250+ employees

Stage

Profitable

Similar jobs

Senior Data Scientist

Founded 2015
Location
Mumbai
Experience
3 - 9 years
Salary
11 - 20 lacs/annum

About UpGrad: UpGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience that delivers tangible career impact to individuals at scale. UpGrad currently offers programs in Data Analytics, Product Management, Digital Marketing and Entrepreneurship, and was rated one of the top 10 most innovative companies in India for 2017 (https://www.fastcompany.com/most-innovative-companies/2017/sectors/india). We plan to launch 6 more programs in technology and management education. UpGrad is co-founded by 3 IIT Delhi alumni, with serial entrepreneur Ronnie Screwvala as the 4th co-founder. UpGrad has committed capital of 100 Cr and, in its first year of operations, built the largest revenue-generating online program in India (PG Diploma in Data Analytics) and the online program with the largest enrolment in India (Startup India learning program). UpGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp, stay relevant and build the careers of tomorrow.

Position: Senior Data Scientist
Position type: Full time
Location: Mumbai

Job description: Are you excited by the challenge and opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating, creating data products from scratch and pushing them live into production systems? Do you want to work with a team of highly motivated members on a mission to empower individuals through education? If so, come join us and become a part of the UpGrad technology team.

At UpGrad, the technology team enables every facet of the business: bringing efficiency to our marketing and sales initiatives, enhancing our students' learning experience, empowering our content, delivery and student-success teams, and aiding our students toward their desired career outcomes. We bring together data and technology to solve these business problems and opportunities. We are looking for a highly skilled, experienced and passionate data scientist to come on board and help create the next generation of data-powered education-tech products. The ideal candidate has worked in a data-science role before, is comfortable working with unknowns, can evaluate data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications. We want someone with a strong math, statistics and data-science background, comfortable handling structured and unstructured data, with strong engineering know-how to implement and support such data products in a production environment. Ours is a highly iterative and fast-paced environment, so flexibility, clear communication and attention to detail are very important too. The ideal candidate is passionate about customer impact and comfortable working with multiple stakeholders across the company.

Basic qualifications:
• 3+ years of experience in analytics, data science, machine learning or a comparable role
• Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or a related discipline
• Experience building and deploying machine-learning models in production systems
• Strong analytical skills: able to make sense of a variety of data and its relation/applicability to the business problem or opportunity at hand
• Strong programming skills: comfortable with Python (pandas, NumPy, SciPy, matplotlib) and with SQL and NoSQL databases
• Strong communication skills: able both to formulate/understand the business problem at hand and to discuss it with stakeholders from a non-data-science background
• Comfortable dealing with ambiguity and competing objectives

Preferred qualifications:
• Experience in text analytics and natural language processing
• Advanced degree in Data Science/Data Analytics or Math/Statistics
• Comfortable with data-visualization tools and techniques
• Knowledge of AWS and data warehousing
• Passion for building data products for production systems, with a strong desire to impact the product through data-science techniques

Job posted by
Omkar Pradhan

Hadoop Engineers

Founded 2012
Location
Bengaluru (Bangalore)
Experience
4 - 7 years
Salary
24 - 30 lacs/annum

Position description: Demonstrates up-to-date expertise in software engineering and applies it to the development, execution and improvement of action plans. Models compliance with company policies and procedures and supports the company mission, values and standards of ethics and integrity. Provides and supports the implementation of business solutions. Provides support to the business. Troubleshoots business and production issues and provides on-call support.

Minimum qualifications:
• BS/MS in Computer Science or a related field
• 5+ years' experience building web applications
• Solid understanding of computer-science principles, including major algorithms such as searching and sorting
• Excellent soft skills
• Strong skills in writing clean code using Java and J2EE technologies
• Understanding of how to engineer RESTful APIs and microservices, and knowledge of major software patterns such as MVC, Singleton, Facade and Business Delegate
• Deep knowledge of web technologies such as HTML5, CSS and JSON
• Good understanding of continuous-integration tools and frameworks such as Jenkins
• Experience working in agile environments such as Scrum and Kanban
• Experience with performance tuning for very large-scale apps
• Experience writing scripts in Perl, Python and shell
• Experience writing jobs using open-source cluster computing frameworks such as Spark
• Database design experience: relational (MySQL, Oracle), SOLR, and NoSQL (Cassandra, MongoDB, Hive)
• Aptitude for writing clean, succinct and efficient code
• Attitude to thrive in a fun, fast-paced, start-up-like environment

Job posted by
Sampreetha Pai

Data Scientist

Founded 2007
Location
Bengaluru (Bangalore)
Experience
0 - 3 years
Salary
2 - 6 lacs/annum

We are an AI-based education platform that pushes for directed, focused, smart learning and helps every individual user at a personal level with the power of AI.

Responsibilities and skills required:
• Excellent programming skills; language-agnostic, and able to implement the tested models into the existing platform seamlessly.
• Reinforcement learning, natural language processing (NLP), neural networks, text clustering, topic modelling, information extraction, information retrieval, deep learning, machine learning, cognitive science and analytics.
• Proven experience implementing and deploying advanced AI solutions using R/Python.
• Apply machine-learning algorithms, statistical data analysis, text clustering and summarization, and extract insights from multiple data points.
• Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised).
• Hands-on with large amounts of structured and unstructured data.

Skills required:
• Visualisation using d3.js, Chart.js, Tableau
• JavaScript
• Python, R, NLP, NLG, machine learning, deep learning and neural networks
• CNNs
• Reinforcement learning
• Unsupervised and supervised learning
• Deep neural networks
• Frameworks: Keras/TensorFlow
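
As a purely illustrative sketch of the kind of supervised deep-learning work listed above (assuming TensorFlow 2.x Keras; the data is synthetic and stands in for extracted text or image features):

# Minimal Keras sketch: a small binary classifier trained on synthetic feature vectors.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.random((1000, 300)).astype("float32")          # synthetic feature vectors
y = (X[:, 0] + X[:, 1] > 1.0).astype("int32")           # synthetic binary labels

model = keras.Sequential([
    keras.Input(shape=(300,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)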

Job posted by
Flyn Sequeira

Lead Data Scientist

Founded 2018
Location
NCR (Delhi | Gurgaon | Noida)
Experience
2 - 5 years
Salary
20 - 30 lacs/annum

Spotmentor is focussed on using intelligence-age tools and technologies like AI and text analytics to create HR technology products that go beyond compliance and ERPs, giving HR the power to become strategic, improve business results and increase competitiveness. HR and people departments have long sought to become strategic partners to the business. We are focussed on taking this concept out of boardroom meetings and making it a reality, and you can be a part of this journey. At the end of it, you would be able to claim that there was an inflection point in history that changed how business was transacted, and you made it happen.

Our first product is a learning and skill-development platform that helps organisations acquire the capabilities critical to them by helping employees attain their best potential through learning opportunities. Spotmentor was started by 4 IIT Kharagpur alumni with experience in creating technology products and in management consulting.

We are looking for a Data Scientist who will help discover the information hidden in vast amounts of data and help us make smarter decisions that benefit the employees of our customer organisations. Your primary focus will be applying data-mining techniques, doing statistical analysis, and building high-quality prediction systems using structured and unstructured data.

Technical responsibilities:
• Select features and build and optimize classifiers using machine-learning techniques
• Data mining using state-of-the-art methods
• Extend existing data sets with third-party sources of information
• Process, cleanse and verify the integrity of data used for analysis
• Build recommendation systems
• Automate the scoring of documents using machine-learning techniques

Salary: This is a founding-team-member role with a salary of 20 to 30 lacs per year and a meaningful ESOP component.
Location: Gurgaon

We believe in making Spotmentor the best place for the pursuit of excellence, and diversity of opinion is an important tool to achieve that. Although as a startup our primary objective is growth, Spotmentor is focussed on creating a diverse and inclusive workplace where everyone can attain their best potential, and we welcome female, minority and specially-abled candidates to apply.
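
One simple, illustrative way to approach the document-scoring and recommendation tasks listed above (assuming scikit-learn; the documents and query are toy examples, not Spotmentor data):

# Score toy documents against a query with TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "python machine learning course for data engineers",
    "leadership and communication skills workshop",
    "deep learning with neural networks in python",
]
query = "machine learning in python"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)   # learn vocabulary from the documents
query_vec = vectorizer.transform([query])          # embed the query in the same space

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {doc}")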

Job posted by
Deepak Singh

Senior NLP Engineer

Founded 2018
Location
Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
Experience
1 - 3 years
Salary
12 - 15 lacs/annum

We're looking for a Senior NLP Engineer (2+ years of experience) for our company, Spotmentor Technologies. Right now our technology team has 5 members; this role is for an early team member and carries significant ESOPs with it. We need someone who can lead the NLP function with both vision and hands-on work, and who is excited to use this area to develop B2B products for enterprise productivity.

Responsibilities:
• Collaborate with cross-functional team members to develop software libraries, tools and methodologies as critical components of our computation platforms.
• Use independent judgment to take existing code, understand its function, and change/enhance it as needed.
• Work as a team leader rather than a member.
• Add valuable inputs to existing algorithms by reading NLP research papers.

Requirements:
• Proficient in Python, with sound knowledge of the data-science libraries, namely NumPy, pandas, NLTK/spaCy, etc.
• Prior experience building a fully functional NLP-based machine-learning model with good results (NER/classification/topic modelling).
• Expert data scientist with professionalism in text classification and feature engineering, using embeddings, POS tags, etc.
• Knowledge of writing database queries (SQL/NoSQL).
• Some background in information-retrieval systems is a big plus.
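
As a minimal illustration of the NER work the role mentions (assuming spaCy with the en_core_web_sm model installed; the sentence is invented):

# Run spaCy's pretrained English pipeline and print the named entities it finds.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes: python -m spacy download en_core_web_sm
doc = nlp("Spotmentor Technologies is hiring NLP engineers in Bengaluru and Gurgaon.")

for ent in doc.ents:
    print(ent.text, ent.label_)     # entity span and its predicted label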

Job posted by
Deepak Singh

Machine Learning Data Engineer

Founded 2006
Location
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience
1 - 4 years
Salary
12 - 15 lacs/annum

Machine Learning Data Engineer - Engineering - Gurgaon, Haryana, India

Who are we? BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, enabling large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London with offices in New York, Bangalore and Gurgaon. BlueOptima's technology is deployed with global enterprises driving value from their software development activities. For example, we work with seven of the world's top ten universal banks (by revenue) and three of the world's top ten telecommunications companies (by revenue, excluding China). Our technology is pushing the limits of complex analytics on large data sets, with more than 15 billion static source-code metric observations of software engineers working in an enterprise software development environment. BlueOptima is an Equal Opportunities employer.

Whom are we looking for? BlueOptima has a truly unique collection of vast datasets relating to the changes that software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, machine learning and data engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of machine learning is a growing internal incentive, and we have a large range of opportunities to expand the value that we deliver to our clients.

What does the role involve? As a Data Engineer you will take problems and ideas from our onsite data scientists, analyse what is involved, and spec and build intelligent solutions using our data, taking responsibility for the end-to-end process. Further, you are encouraged to identify new ideas, metrics and opportunities within our dataset, and to identify and report when an idea or approach isn't succeeding and should be stopped. You will use tools ranging from advanced machine-learning algorithms to statistical approaches, selecting the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.

Responsibilities include:
• Solve problems using machine learning and advanced statistical techniques based on business needs.
• Identify opportunities to add value and solve problems using machine learning across the business.
• Develop tools to help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insights they reveal to senior managers to support decision-making.
• Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R and Python and/or similar statistical tools.
• Produce ad hoc or bespoke analysis and reports.
• Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value.
• Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates.
• Resolve issues, find improvements to existing machine-learning solutions, and explain their impact.

Essential skills / experience required:
• Minimum Bachelor's degree in Computer Science/Statistics/Mathematics or equivalent.
• Minimum of 3+ years' experience developing solutions using machine-learning algorithms.
• Strong analytical skills demonstrated through data engineering or similar experience.
• Strong fundamentals in statistical analysis using R or a similar programming language.
• Experience applying machine-learning algorithms and techniques to problems on structured and unstructured data.
• An in-depth understanding of a wide range of machine-learning techniques, and of which algorithms are suited to which problems.
• A drive not only to identify a solution to a technical problem but to see it all the way through to inclusion in a product.
• Strong written and verbal communication skills.
• Strong interpersonal and time-management skills.

Desirable skills / experience:
• Experience automating basic tasks to maximise time for more important problems.
• Experience with PostgreSQL or a similar relational database.
• Experience with MongoDB or a similar NoSQL database.
• Experience with data visualisation (via Tableau, QlikView, SAS BI or similar) is preferable.
• Experience using task-tracking systems (e.g. Jira) and distributed version control systems (e.g. Git).
• Comfortable explaining very technical concepts to non-expert people.
• Experience of project management and of designing processes to deliver successful outcomes.

Why work for us?
• Work with a unique and truly vast collection of datasets
• Above-market remuneration
• Stimulating challenges that fully utilise your skills
• Work on real-world technical problems whose solutions cannot simply be found on the internet
• Work alongside other passionate, talented engineers
• Hardware of your choice
• Our fast-growing company offers the potential for rapid career progression
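
A minimal sketch of the Python-side statistical analysis the listing refers to (assuming NumPy and statsmodels; the data is synthetic and the variable names are hypothetical):

# Fit an ordinary-least-squares regression on synthetic data and inspect the estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hours = rng.uniform(1, 40, size=200)                   # synthetic predictor
effort = 3.0 + 0.8 * hours + rng.normal(0, 2, 200)     # synthetic response with noise

X = sm.add_constant(hours)          # add an intercept column
results = sm.OLS(effort, X).fit()   # ordinary least squares
print(results.params)               # estimated intercept and slope
print(results.rsquared)             # goodness of fit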

Job posted by
Deepthi Ravindran

Machine Learning Data Engineer

Founded 2006
Location
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience
3 - 9 years
Salary
12 - 25 lacs/annum

(Job description identical to the BlueOptima Machine Learning Data Engineer listing above.)

Job posted by
Rashmi Anand

Data Scientist

Founded 2003
Location
Bengaluru (Bangalore), Chennai, Pune, Mumbai
Experience
7 - 13 years
Salary
14 - 20 lacs/annum

Requirement specifications:
Job title: Data Scientist
Experience: 7 to 10 years
Work location: Mumbai, Bengaluru, Chennai
Job role: Permanent
Notice period: Immediate to 60 days

Job description:
• Support delivery of one or more data-science use cases, leading on data discovery and model-building activities
• Conceptualize and quickly build POCs on new product ideas; should be willing to work as an individual contributor
• Open to learning and implementing newer tools and products
• Experiment with and identify the best methods, techniques and algorithms for analytical problems
• Operationalize: work closely with the engineering, infrastructure, service-management and business teams to operationalize use cases

Essential skills:
• Minimum 2-7 years of hands-on experience with statistical software tools: SQL, R, Python
• 3+ years' experience in business analytics, forecasting or business planning, with emphasis on analytical modelling, quantitative reasoning and metrics reporting
• Experience working with large data sets to extract business insights or build predictive models
• Proficiency in one or more statistical tools/languages (Python, Scala, R, SPSS or SAS) and related packages such as pandas, SciPy/scikit-learn, NumPy, etc.
• Good data intuition and analysis skills; SQL and PL/SQL knowledge is a must
• Able to manage and transform a variety of datasets: cleanse, join and aggregate them
• Hands-on experience running methods such as regression, random forests, k-NN, k-means, boosted trees, SVMs, neural networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis and statistics (hypothesis testing, descriptive statistics)
• Deep domain knowledge (BFSI, manufacturing, auto, airlines, supply chain, retail & CPG)
• Demonstrated ability to work under time constraints while delivering incremental value
• Education: minimum a Masters in Statistics, or a PhD in domains linked to applied statistics, applied physics, artificial intelligence, computer vision, etc.; BE/BTech/BSc Statistics/BSc Maths
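
For illustration, a minimal scikit-learn example of one of the methods named above, a random forest classifier, trained and evaluated on synthetic data:

# Train a random forest on synthetic data with a simple train/test split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))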

Job posted by
Sowmya M

machine learning

Founded 2018
Location
Bengaluru (Bangalore)
Experience
1 - 2 years
Salary
3 - 5 lacs/annum

1. Image processing and classification: classify satellite images of areas into categories such as park, open area, road, forest, pond or rooftop.
2. Build robust forecasting models for complex time-series data, with several time series correlated with each other.
3. Train word-embedding models for natural-language-processing applications in Python using Gensim: train a word2vec word-embedding model on text data, visualize a trained word-embedding model using Principal Component Analysis, and load pre-trained word2vec and GloVe word-embedding models from Google and Stanford. Develop models for natural-language understanding.
4. Using OpenCV, prepare models to identify and track intrusive objects in an open field under video surveillance.
5. Build personalization and recommendation engines for certain classes of objects.
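
Point 3 above describes a concrete workflow; a minimal sketch of it (assuming gensim 4.x and scikit-learn, with a toy corpus rather than real training data) might look like this:

# Train a small word2vec model and project the learned vectors to 2-D with PCA.
from gensim.models import Word2Vec
from sklearn.decomposition import PCA

sentences = [
    ["satellite", "image", "of", "a", "park"],
    ["satellite", "image", "of", "a", "road"],
    ["forecast", "time", "series", "data"],
    ["track", "objects", "under", "video", "surveillance"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=100, seed=1)

words = list(model.wv.index_to_key)                    # vocabulary learned from the corpus
coords = PCA(n_components=2).fit_transform(model.wv[words])
for word, (x, y) in zip(words, coords):
    print(f"{word}: ({x:.3f}, {y:.3f})")               # 2-D coordinates for visualization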

Job posted by
pruthvi k

Data Scientist

via Rapido
Founded 2016
Location
Bengaluru (Bangalore)
Experience
2 - 3 years
Salary
15 - 20 lacs/annum

Job description: Do you want to join an innovative team of scientists who use machine learning, NLP and statistical techniques to provide the best customer experience on earth? Do you want to change the way that people work with customer experience? Our team wants to lead the technical innovations in these spaces and set the bar for every other company that exists. We love data, and we have lots of it. We're looking for a business-intelligence engineer to own end-to-end business problems and metrics that have a direct impact on the bottom line of our business while improving customer experience. If you see how big data and cutting-edge technology can be used to improve customer experience, if you love to innovate, if you love to discover knowledge in big structured and unstructured data, and if you deliver results, then we want you on our team.

Major responsibilities:
• Analyze and extract relevant information from large amounts of both structured and unstructured data to help automate and optimize key processes
• Design structured, multi-source data solutions to deliver the dashboards and reports that make data actionable
• Drive the collection of new data and the refinement of existing data sources to continually improve data quality
• Support data analysts and product managers by turning business requirements into functional specifications and then executing delivery
• Lead the technical lifecycle of data presentation, from data sourcing to transforming data into user-facing metrics
• Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation and model implementation

Basic qualifications:
• Bachelor's or Master's degree in Computer Science, Systems Analysis or a related field
• 3+ years' experience in data modeling, ETL development and data warehousing
• 3+ years' experience with BI/DW/ETL projects
• Strong background in data relationships, modeling and mining
• Technical guru: an SQL expert and a master of one of Python, Spark, Scala or Julia
• Strong communication and data-presentation skills
• Strong problem-solving ability

Preferred qualifications:
• Experience working with large-scale data warehousing and analytics projects, including AWS technologies (S3, EC2, Data Pipeline) and other big-data technologies
• Distributed programming experience is highly recommended
• 2+ years of industry experience in predictive modeling and analysis
• Technically deep and business-savvy enough to interface with all levels and disciplines within the organization

Job posted by
Pushpa Latha