Julien Innovations India Pvt Ltd is a start-up building products in NLP, speech, RPA and analytics. Our products have been commercialized and funded in the auto industry. We are currently looking to add hands, particularly on speech modules.

What we need:
a) Strong working knowledge of Kaldi
b) AI, speech and NLP programming skills in related technologies
c) Sound basic knowledge of statistics and math
d) Preferably 2+ years of industry experience in speech technologies
e) High learning ability, passion and integrity
f) A start-up mentality that values learning and achievement more than simply looking for a good job

What you get:
- Tremendous learning opportunity
- Work on innovative and fundamental speech, NLP, RPA and analytics programs
- Work with top clients directly
- Fast-paced work, open culture
- ESOPs for deserving employees after a good work record
- Selecting features, building and optimizing models using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company’s data with third-party sources of information when needed
- Enhancing data collection procedures to include information relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
- Adopting new research methodologies, including deep learning (CNNs, LSTMs), on projects
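The anomaly detection responsibility above can be illustrated with a minimal sketch. This uses a median-absolute-deviation (MAD) rule on a univariate metric stream; the function name, threshold and sample data are illustrative assumptions, not anything from the posting:

```python
from statistics import median

def mad_anomalies(values, threshold=3.5):
    """Flag indices whose modified z-score, based on the median absolute
    deviation (MAD), exceeds `threshold`. Unlike a plain z-score, the
    median-based statistic is not inflated by the outliers themselves."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # all points (nearly) identical: nothing to flag
        return []
    # 0.6745 rescales MAD so it is comparable to a standard deviation
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# A metric stream with one obvious spike at index 7:
print(mad_anomalies([10, 11, 9, 10, 12, 10, 11, 95, 10, 9]))  # [7]
```

A production system would run this (or a learned model) on a schedule and track precision/recall of the flags over time, which is what "constantly tracking their performance" implies.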
Seeking a data expert at CogniCor

Want to play around with data and come up with new methods to enhance our virtual agent? Want to build a system as intelligent as you are? Want to be part of a team that’s creating the next level of AI? Want to be part of a company that holds 6 patents? Are you a person who enjoys analysing data and gets excited when you discover interesting data relations? Join our team.

Job Responsibilities
- Develop new analytical methods and/or tools as required.
- Collaborate with unit managers, end users, development staff, and other stakeholders to integrate data mining results with existing systems.
- Respond to and resolve data mining performance issues.
- Monitor data mining system performance and implement efficiency improvements.
- Use analytics to drive key success metrics related to yield management and revenue generation.
- Data mining using state-of-the-art methods
- Extending the company’s data with third-party sources of information when needed
- Enhancing data collection procedures to include information relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Monitoring performance and advising on any necessary infrastructure changes
- Defining data retention policies
- Data visualization, communication and reporting

Skills and Qualifications
- A unique blend of analytical, mathematical and technical skills.
- Good scripting and programming skills in Python, Java, etc.
- Bachelor's or Master’s degree in applied statistics, data mining or a relevant field.
- Experience with common data science toolkits, such as R, NumPy, etc.
- Experience with data visualisation tools, such as D3.js, ggplot, etc.
- Proficiency in database query languages with NoSQL databases, such as MongoDB.
- Exposure to machine learning techniques
- 1-3 years of experience in a relevant field

About CogniCor
We are much sought after for our award-winning AI technology. We were built on innovation, and we’ve implemented top-of-the-class Virtual Agents across various sectors in Asia, the USA and Europe. We offer a start-up environment that fosters thinking out of the box and rewards initiative. Here, the sky is not the limit - your ambitions are. You’ll research top technologies, be exposed to AI technology implementation in real life, and interact with top minds in AI and NLP.
Finding innovative solutions is a challenge. If you are a person excited to take up challenges, WE NEED YOU!

Seeking a machine learning expert at CogniCor

Want to be part of a team that’s creating the next level of AI? Want to be part of a company that holds 6 patents? Are you a person who can apply innovative machine learning, deep learning, reinforcement learning and statistical techniques to create scalable solutions for business problems? If you are interested in creating and implementing new technologies, then we should talk!

Job Responsibilities
- Use machine learning and statistical techniques to create new, scalable solutions for predictive/decisive problems
- Design, develop and evaluate models for predictive learning
- Establish automated processes for data analysis, model development, model validation and model implementation
- Research and implement novel machine learning and statistical approaches

Basic Qualifications
- An MS/PhD in CS, Machine Learning, Statistics or a highly quantitative field; PhD strongly preferred.
- Experience with deep learning and reinforcement learning techniques and frameworks such as Keras, TensorFlow, DL4J, Theano, etc.
- 4+ years of industrial experience in predictive modeling and analysis, and predictive software development
- Strong problem-solving ability
- Good skills with Java or Python (or a similar language)
- Experience using R, MATLAB, or other statistical software
- Strong communication and data presentation skills

About CogniCor
We are much sought after for our award-winning AI technology. We are built on innovation, and have implemented top-of-the-class Virtual Agents across various sectors in Asia, the USA and Europe. We offer a start-up environment that fosters thinking out of the box and rewards initiative. Here, the sky is not the limit - your ambitions are.
As the recipient of several prestigious awards, such as the “Delighting tomorrow’s customer” award from British Telecom and the “Most innovative web startup” award from the European Commission, CogniCor exposes its employees to real-life implementation of AI technology and interaction with top minds in the fields of AI and NLP.

What you gain
We were built on innovation, and are much sought after for our award-winning AI technology. You’ll help us build and implement top-of-the-class Virtual Agents across various sectors in Asia, the USA and Europe. We offer a start-up environment that fosters thinking out of the box and rewards initiative. Here, the sky is not the limit - your ambitions are. You’ll research top technologies, be exposed to AI technology implementation in real life, and interact with top minds in AI and NLP.
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
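The ensemble-modeling item above can be sketched with the simplest combiner there is: a majority vote over the label predictions of several models. The spam/ham labels and model outputs are purely illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine classifiers by majority vote: `predictions` is a list of
    per-model prediction lists, one label per sample. On a tie, the label
    seen first wins (Counter preserves first-seen order for equal counts)."""
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*predictions)]

model_a = ["spam", "ham", "spam", "ham"]
model_b = ["spam", "spam", "spam", "ham"]
model_c = ["ham", "ham", "spam", "ham"]
print(majority_vote([model_a, model_b, model_c]))  # ['spam', 'ham', 'spam', 'ham']
```

In practice one would weight votes by validation accuracy, or use stacking, but the voting idea is the core of ensemble combination.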
• Exposure to deep learning, neural networks, or related fields, and a strong interest and desire to pursue them.
• Experience in natural language processing, computer vision, machine learning or machine intelligence (artificial intelligence).
• Programming experience in Python.
• Knowledge of machine learning frameworks like TensorFlow.
• Experience with software version control systems like Git/GitHub.
• Understanding of Big Data technologies such as Hadoop, MongoDB and Apache Spark
What will you do:
• Develop ML-based models for financial portfolios and investment advisory
• Design, implement and backtest investment strategies
• Analyze investment behavior of portfolio managers to prevent fraud and malpractice
• Coordinate with the engineering team to incorporate models into the platform.

What are we looking for:
• Bachelor's, Master's or PhD in Maths, Statistics or Computer Science
• 2 to 4 years of experience
• Exposure to financial markets
• Familiarity with Python/R/scikit-learn/TensorFlow
• Familiarity with SQL, web scraping, and other data manipulation tools and packages.

Who is this role for:
• Are you looking for a great technical challenge and looking to build a new product from the ground up?
• Do you want to work closely with leading technology solutions in the world of finance and on cutting-edge technologies like algo-trading, blockchain, etc.?
• Do you want to build software that will be used by millions of people all around the world and that plays in 4 exciting technology domains – Social, Mobile, Analytics and Cloud (SMAC)?
• Do you want to work with a team with a proven track record in both the finance domain and in technology/product?
• Beyond this, do you have the drive to go through the cycle that is natural to any startup and, finally, make it big?

If you are looking for all of these, then Kristal is the place for you!

Perks:
• Apple MacBook
• Working with a team of passionate people just like you who love to build products

For further information visit us at www.kristal.ai
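The backtesting responsibility named in this posting can be sketched with a deliberately naive moving-average-crossover backtest. The window sizes, the toy price series, and the zero-cost assumption are all illustrative choices on our part, not anything Kristal prescribes:

```python
def sma(series, window):
    """Simple moving average; None until a full window is available."""
    return [None if i + 1 < window
            else sum(series[i + 1 - window:i + 1]) / window
            for i in range(len(series))]

def backtest_crossover(prices, fast=2, slow=3):
    """Hold the asset on days where yesterday's fast SMA was above the
    slow SMA; return the strategy's cumulative return (1.0 = flat).
    Ignores transaction costs and slippage -- a toy model only."""
    f, s = sma(prices, fast), sma(prices, slow)
    equity = 1.0
    for i in range(1, len(prices)):
        signal = (f[i - 1] is not None and s[i - 1] is not None
                  and f[i - 1] > s[i - 1])
        if signal:  # in the market: earn that day's return
            equity *= prices[i] / prices[i - 1]
    return equity

print(round(backtest_crossover([100, 101, 102, 103, 104]), 4))  # 1.0196
```

Note the signal uses only data available at the close of the previous day; using same-day averages would leak future information into the backtest, a classic lookahead bug.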
Job Description

Who are we?
BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, enabling large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London, with offices in New York, Bangalore, and Gurgaon.

BlueOptima’s technology is deployed with global enterprises driving value from their software development activities. For example, we work with seven of the world’s top ten universal banks (by revenue) and three of the world’s top ten telecommunications companies (by revenue, excl. China). Our technology is pushing the limits of complex analytics on large datasets, with more than 15 billion static source code metric observations of software engineers working in an enterprise software development environment. BlueOptima is an Equal Opportunities employer.

Whom are we looking for?
BlueOptima has a truly unique collection of vast datasets relating to the changes software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, machine learning and data engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of machine learning is a growing internal initiative, and we have a large range of opportunities to expand the value we deliver to our clients.

What does the role involve?
As a Data Engineer you will take problems and ideas from our onsite Data Scientists, analyze what is involved, and spec and build intelligent solutions using our data. You will take responsibility for the end-to-end process.
Further to this, you are encouraged to identify new ideas, metrics and opportunities within our dataset, and to identify and report when an idea or approach isn’t succeeding and should be stopped. You will use tools ranging from advanced machine learning algorithms to statistical approaches, and will be able to select the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.

Responsibilities include:
- Solve problems using machine learning and advanced statistical techniques based on business needs.
- Identify opportunities to add value and solve problems using machine learning across the business.
- Develop tools to help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insight they reveal to support decision-making.
- Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R and Python and/or similar statistical tools.
- Produce ad-hoc or bespoke analysis and reports.
- Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value.
- Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates.
- Resolve issues, find improvements to existing machine learning solutions, and explain their impacts.

ESSENTIAL SKILLS / EXPERIENCE REQUIRED:
- Minimum Bachelor's degree in Computer Science, Statistics, Mathematics or equivalent.
- 3+ years of experience developing solutions using machine learning algorithms.
- Strong analytical skills demonstrated through data engineering or similar experience.
- Strong fundamentals in statistical analysis using R or a similar programming language.
- Experience applying machine learning algorithms and techniques to solve problems on structured and unstructured data.
- An in-depth understanding of a wide range of machine learning techniques, and an understanding of which algorithms are suited to which problems.
- A drive to not only identify a solution to a technical problem but to see it all the way through to inclusion in a product.
- Strong written and verbal communication skills
- Strong interpersonal and time management skills

DESIRABLE SKILLS / EXPERIENCE:
- Experience automating basic tasks to maximise time for more important problems.
- Experience with PostgreSQL or a similar relational database.
- Experience with MongoDB or a similar NoSQL database.
- Experience with data visualisation tools (Tableau, QlikView, SAS BI or similar) is preferable.
- Experience using task tracking systems (e.g. Jira) and distributed version control systems (e.g. Git).
- Comfortable explaining very technical concepts to non-expert audiences.
- Experience of project management and designing processes to deliver successful outcomes.

Why work for us?
- Work with a unique and truly vast collection of datasets
- Above-market remuneration
- Stimulating challenges that fully utilise your skills
- Work on real-world technical problems whose solutions cannot simply be found on the internet
- Work alongside other passionate, talented engineers
- Hardware of your choice
- Our fast-growing company offers the potential for rapid career progression
Company Profile:
Thrymr Software is an outsourced product development startup. Our primary development center is in Hyderabad, India, with a team of 100+ members across various technical roles. Thrymr also has a presence in Singapore, Hamburg (Germany) and Amsterdam (Netherlands). Thrymr works with companies to take complete ownership of building their end-to-end products, be it web or mobile applications or advanced analytics including GIS, machine learning or computer vision. http://thrymr.net

Job Location: Hyderabad, Financial District

Job Description:
As a Data Scientist, you will evaluate and improve products. You will collaborate with a multidisciplinary team of engineers and analysts on a wide range of problems. Your responsibilities will include (but not be limited to) designing, benchmarking and tuning machine-learning algorithms; painlessly and securely manipulating large and complex relational data sets; assembling complex machine-learning strategies; and building new predictive apps and services.

Responsibilities:
1. Work with large, complex datasets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed. Conduct end-to-end analysis that includes data gathering and requirements specification, processing, analysis, ongoing deliverables, and presentations.
2. Build and prototype analysis pipelines iteratively to provide insights at scale. Develop a comprehensive understanding of our data structures and metrics, advocating for changes where needed for both product development and sales activity.
3. Interact cross-functionally with a wide variety of people and teams. Work closely with engineers to identify opportunities for, design, and assess improvements to various products.
4. Make business recommendations (e.g. cost-benefit, forecasting, experiment analysis) with effective presentations of findings to multiple levels of stakeholders through visual displays of quantitative information.
5. Research and develop analysis, forecasting, and optimization methods to improve the quality of user-facing products; example application areas include ads quality, search quality, end-user behavioral modeling, and live experiments.
6. Apart from the core data science work (subject to projects and availability), you may also be required to contribute to regular software development work such as web, backend and mobile application development.

Our Expectations:
1. You should have at least a B.Tech/B.Sc degree or equivalent practical experience (e.g., statistics, operations research, bioinformatics, economics, computational biology, computer science, mathematics, physics, electrical engineering, industrial engineering).
2. You should have practical experience working with statistical packages (e.g., R, Python, NumPy) and databases (e.g., SQL).
3. You should have experience articulating business questions and using mathematical techniques to arrive at an answer using available data, and experience translating analysis results into business recommendations.
4. You should have strong analytical and research skills.
5. You should have good academics.
6. You will have to be very proactive and submit your daily/weekly reports diligently.
7. You should be comfortable working exceptionally hard, as we are a startup and this is a high-performance work environment.
8. This is NOT a 9-to-5 kind of job; you should be able to work long hours.

What you can expect:
1. High-performance work culture
2. Short-term travel across the globe at very short notice
3. Accelerated learning (you will learn at least three times as much compared to similar roles at other companies) and become a lot more technical
4. A happy-go-lucky team with zero politics
5. Zero tolerance for unprofessional behavior and substandard performance
6. Performance-based appraisals that can happen anytime, with considerable hikes compared to the single-digit annual hikes that are the market standard
* Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
* Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
* Assess the effectiveness and accuracy of new data sources and data gathering techniques.
* Develop custom data models and algorithms to apply to data sets.
* Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
Ketto is an online crowdfunding platform that supports social causes, NGOs, creative projects and entrepreneurial ventures in India. With a supporter base of 5 million, Ketto has raised over INR 300 crore across about 1.5 lakh fundraisers. We are Asia's most trusted and visited crowdfunding platform, helping you raise funds for personal needs, charitable causes and creative ideas.

Ketto uses social media to mobilize the youth, effecting change and fueling creative ideas. It makes online giving and fundraising easy and safe. With its strong celebrity support, Ketto aims to connect individuals and brands with fundraisers, increasing awareness for various causes - social, personal, creative and entrepreneurial. Ketto aims to be the premier crowdfunding platform that empowers the crowd to fund and raise funds for their favorite causes, creative projects and entrepreneurial ideas using social media and e-commerce. Through crowdfunding, we're constantly trying to bring about a positive change in the world, one fundraiser at a time.

If you're the kind of person who loves playing with numbers and likes to make a difference with your work, join in! Here's all that you need to know about this profile.

You would be responsible for:
1. Owning all the analytics problems at Ketto.
2. Building predictive models to solve various business problems.
3. Lots of data crunching, analyzing and presenting the data to senior management.
4. Preparing various data/statistical models and optimizing the same.
5. Maintaining consistency of data for the Product, Marketing, and Business Development teams.

What we would want:
1. Advanced understanding of SQL, R, and machine learning concepts
2. 1-3 years of experience as an analyst or a data scientist
3. Passion for translating data and trends into meaningful insights
4. Experience with data visualization software like Tableau, Looker, Power BI, etc.
5. A good understanding of statistics and machine learning concepts
What you would get:
1. High-end exposure to the Product, Marketing, and Business Development teams.
2. An agile environment, a startup culture, and the chance to work with some of the coolest professionals in the industry.

If the above-mentioned responsibilities excite you and you want to be a key to tomorrow, we would be excited to meet you.
Good knowledge of SQL and Microsoft Excel; one programming language among SAS, Python or R
HackerEarth provides enterprise software solutions that help organisations with their innovation management and talent assessment needs. HackerEarth Recruit is a talent assessment platform that enables efficient technical talent screening, allowing organisations to build strong, proficient teams. HackerEarth Sprint is an innovation management software that helps organisations drive innovation through internal and external talent pools, including HackerEarth’s global community of 2M+ developers. Today, HackerEarth serves 750+ organizations, including leading Fortune 500 companies from around the world. General Electric, IBM, Amazon, Apple, Wipro, Walmart Labs and Bosch are some of the brands that trust HackerEarth to help them drive growth.

Job Description
We are looking for an ML Engineer who will help us discover the information hidden in vast amounts of data and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality models integrated with our products. You will primarily be working on recommendation engines, text classification, automated tagging of documents, lexical similarity, semantic similarity and similar problems to start with.
Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company’s data with third-party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
- Developing custom data models and algorithms to apply to data sets
- Assessing the effectiveness and accuracy of new data sources and data gathering techniques
- Coordinating with different functional teams to implement models and monitor outcomes
- Developing processes and tools to monitor and analyze model performance and data accuracy

Skills and Qualifications
- 4+ years of experience using statistical computing languages (R, Python, etc.) to manipulate data and draw insights from large data sets
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Proficiency in query languages such as SQL, Hive, Pig
- Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, etc.
- Experience using web services: Redshift, S3, etc.
- Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modelling, clustering, decision trees, neural networks, etc.
- Knowledge and experience in statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc.
- Experience working with and creating data architectures
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Excellent written and verbal communication skills for coordinating across teams
A drive to learn and master new technologies and techniques. You should be creative, enthusiastic, and take pride in the work that you produce. Above all, you should love to build and ship solutions that real people will use every day.
JD:

Responsibilities:
• Detailed qualitative and quantitative assessment of social media data
• Preparing detailed reports with metrics and findings
• Using the proprietary platform in conjunction with other tools to create various ad-hoc reports
• Creating report dashboards using Excel, SQL and PowerPoint

Skills required:
• Experience with advanced Excel and PowerPoint
• Adept at writing complex SQL queries
• Understanding of social and web metrics and their application in the digital marketing universe
• Good analytical skills, data analysis and reporting
• Highly organized, detail-oriented, with good coordination skills
• Strong verbal, written and presentation skills

[Experience with real-life unstructured data and data analysis & visualization tools (Python/R etc.) would be an added advantage]
The candidate should have:
- A good understanding of statistical concepts
- Worked on data analysis and model building for at least 1 year
- The ability to implement data warehouse and visualisation tools (IBM, Amazon or Tableau)
- Experience using ETL tools
- An understanding of scoring models

The candidate will be required to:
- Build models for approval or rejection of loans
- Build various reports (standardised for monthly reporting) to optimise the business
- Implement the data warehouse

The candidate should be a self-starter, able to work without supervision. You will be the first and only employee in this role for the next 6 months.
We're an early-stage film-tech startup with a mission to empower filmmakers and independent content creators with data-driven decision-making tools. We're looking for a data person to join the core team. Please get in touch if you would be excited to join us on this super exciting journey of disrupting the film production and distribution business. We are currently collaborating with Rana Daggubati's Suresh Productions and work out of their studio in Hyderabad - so exposure and opportunities to work on real issues faced by the media industry will be plentiful.
Simplilearn.com is the world’s largest professional certifications company and an Onalytica Top 20 influential brand. With a library of 400+ courses, we've helped 500,000+ professionals advance their careers, delivering $5 billion in pay raises. Simplilearn has over 6500 employees worldwide, and our customers include Fortune 1000 companies, top universities, leading agencies and hundreds of thousands of working professionals. We are growing over 200% year on year and having fun doing it.

Description
We are looking for candidates with strong technical skills and a proven track record in building predictive solutions for enterprises. This is a very challenging role and provides an opportunity to work on developing insight-based Ed-Tech software products used by a large set of customers across the globe. It provides an exciting opportunity to work on various advanced analytics and data science problem statements using cutting-edge modern technologies, collaborating with the product, marketing and sales teams.

Responsibilities
• Work on enterprise-level advanced reporting requirements and data analysis.
• Solve various data science problems: customer engagement, dynamic pricing, lead scoring, NPS improvement, optimization, chatbots, etc.
• Work on data engineering problems utilizing our tech stack - S3 Data Lake, Spark, Redshift, Presto, Druid, Airflow, etc.
• Collect relevant data from source systems, and use crawling and parsing infrastructure to put together data sets.
• Craft, conduct and analyse A/B experiments to evaluate machine learning models/algorithms.
• Communicate findings and take algorithms/models to production with ownership.

Desired Skills
• BE/BTech/MSc/MS in Computer Science or a related technical field.
• 2-5 years of experience in an advanced analytics discipline with solid data engineering and visualization skills.
• Strong SQL skills and BI skills using Tableau, and the ability to perform various complex analytics on data.
• Ability to propose hypotheses and design experiments in the context of specific problems using statistics and ML algorithms.
• Good overlap with modern data processing frameworks such as AWS Lambda and Spark using Scala or Python.
• Dedication and diligence in understanding the application domain, collecting/cleaning data and conducting various A/B experiments.
• A Bachelor's degree in Statistics, or prior experience with Ed-Tech, is a plus
• Proficiency using R/Python for predictive modelling, pattern recognition, and algorithm prototyping
• Exposure to appropriate tools in a Linux environment
• Can apply probability models and machine learning approaches to solve complex problems
• Adept at data manipulation, transformation, and decomposition
• Can identify key variables, parameters and elements defining a problem or its solution
• Can distill highly technical knowledge and techniques for collaborators outside the problem domain
• Java and/or Scala programming is a plus
• Comfortable understanding business domain problems and formulating appropriate algorithms
• Able to explain complex problems and solutions to business stakeholders and non-data-science engineers
If you enjoy the challenges of working at a start-up and are passionate about meeting new people and building strong relations, then keep reading.

eKincare has developed India’s first AI-based personal health assistant. eKincare’s patent-pending technology reads medical data from health records and various healthcare interventions to predict future health risks and provide timely personalised recommendations to beat those risks. Recognised among the “100 most innovative digital healthcare companies in the world” by the Journal of mHealth, we are leveraging technology and data science to make healthcare simple. See our story @ https://yourstory.com/2016/04/ekincare-growth-story/

If you're an energetic, street-smart, creative, hungry, "crushing it" kind of professional and are interested in truly making a difference in Indian healthcare, then apply now! Before it's too late!

• We expect you to work on complex, cross-functional analytical and research-oriented projects using advanced computational, machine learning and deep learning algorithms.
• You will be responsible for developing proprietary solutions handling large amounts of structured and unstructured data.
• You will be expected to use relevant knowledge of computer science fundamentals and machine learning to help build scalable analytical solutions.
• You will ensure that the data science team works closely with our technical team on the design and development of the analytical solutions.

Pre-requisites and skillsets for this role:
• 4+ years of experience applying concepts in data science, machine learning, algorithm development, advanced computing or statistical modeling to solve real-world problems.
• Familiarity with Python (Jupyter notebooks) and its frameworks - scikit-learn, Keras, PyTorch.
• Strong modelling skills and the ability to build practical models using advanced algorithms such as random forests, neural networks, etc.
• An undergraduate degree from premier institutions such as IIT, BITS, NIT, etc.
• Strong communication skills, enabling you to clearly convey your thoughts and work to external or internal teams
Crediwatch is a leading fintech organization working with major national and international financial institutions, helping them in the space of analytics, credit appraisal, risk profiling, lead generation, and early-warning systems. We are looking to expand our data science team, which will work on cutting-edge platforms and help enhance Crediwatch's proprietary deep learning models across text, video, image, and audio formats. We are looking for candidates who have a hands-on understanding of the various industry-standard ML platforms and tools needed to build models at scale.
1 to 3 years of experience in product analytics - Highly conversant with Google Analytics and other similar tools - Basic programming ability (preferably R or Python)
4+ years of applied research or industry work experience ● Degree in statistics, applied mathematics, machine learning, or another highly quantitative field ● Experience working with technologies and tools like R, GraphLab, Hadoop, Hive, Spark, Pig ● Coding proficiency in at least one language such as Python or Java
We are looking for highly passionate and enthusiastic players to solve problems in medical data analysis using a combination of image processing, machine learning and deep learning. As a Senior Computer Scientist at SigTuple you will have the onus of creating and leveraging state-of-the-art algorithms in machine learning, image processing and AI that will impact billions of people across the world by creating healthcare solutions that are accurate and affordable. You will collaborate with our current team of super awesome geeks in cracking super complex problems in a simple way by creating experiments, algorithms and prototypes that not only yield high accuracy but are also designed and engineered to scale. We believe in innovation - needless to say, you will be part of creating intellectual property such as patents and will contribute to the research community by publishing papers - it is something that we value the most. What we are looking for: · Hands-on experience along with a strong understanding of foundational algorithms in machine learning, computer vision or deep learning. Prior experience applying these techniques to images and videos is good-to-have. · Hands-on experience in building and implementing advanced statistical analysis, machine learning and data mining algorithms. · Programming experience in C, C++ and Python What you should have: · 2 - 5 years of relevant experience in solving problems using machine learning or computer vision · A Bachelor's, Master's or PhD degree in computer science or a related field. · Be an innovative and creative thinker - somebody who is not afraid to try something new and inspires others to do so. · Thrive in a fast-paced and fun environment. · Work with a bunch of data scientist geeks and disruptors striving for a big cause. What SigTuple can offer: You will be working with an incredible team of smart and supportive people, driven by a common force to change things for the better.
With an opportunity to deliver high-calibre mobile and desktop solutions, integrated with hardware, that will transform healthcare from the ground up, there will ultimately be many different challenges for you to face. Suffice it to say that if you thrive in these environments, the buzz alone will keep you energized. In short, you will snag a place at the table of one of the most vibrant start-ups in the industry!
Drive the creation of new models and capabilities that will leapfrog the traditional bureau-based modelling done at most major financial institutions, leveraging a wide array of data from both traditional and non-traditional sources. ● Acquire a deep understanding of the MSME lending domain ● Perform data integration, manipulation, and querying for purposes of reporting and more sophisticated analytics ● Engage in regular problem-solving sessions with the overall leadership team to present findings and refine the analytics plan ● Manage data analysis to develop fact-based recommendations for innovation projects ● Mine Big Data and other unstructured data to tap untouched data sources and deliver insight into new and emerging solutions ● Remain current on new developments in data analytics, Big Data, predictive analytics, machine learning and technology
Primary Skills : - B.Tech/MS/PhD degree in Computer Science, Computer Engineering or a related technical discipline with 3-4 years of industry experience in Data Science. - Proven experience of working on unstructured and textual data. Deep understanding of and expertise in NLP techniques (POS tagging, NER, semantic role labelling, etc.). - Experience working with supervised/unsupervised ML models such as linear/logistic regression, clustering, support vector machines (SVM), neural networks, Random Forest, CRF, Bayesian models, etc. The ideal candidate will have wide coverage of the different methods/models, and in-depth knowledge of some. - Strong coding experience in Python, R and Apache Spark. Python skills are mandatory. - Experience with NoSQL databases such as MongoDB, Cassandra, HBase, etc. - Experience working with Elasticsearch is a plus. - Experience working on Microsoft Azure is a plus, although not mandatory. - Basic knowledge of Linux and related scripting, e.g. Bash/shell scripts. Role Description (Roles & Responsibilities) : - The candidate will research, design and implement state-of-the-art ML systems using predictive modelling, deep learning, natural language processing and other ML techniques to help meet business objectives. - The candidate will work closely with the product development/engineering team to develop solutions for complex business problems or product features. - Handle Big Data scale for training and deploying ML/NLP-based business modules/chatbots.
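To give a flavour of the supervised models over textual data that the posting above lists (a Bayesian model being the simplest), here is a minimal sketch of a multinomial Naive Bayes text classifier in plain Python. The tiny corpus, labels and function names are invented for illustration only:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Estimate per-class log-priors and Laplace-smoothed word log-likelihoods."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for doc, label in zip(docs, labels):
        for word in doc.split():
            word_counts[label][word] += 1
            vocab.add(word)
    priors = {c: math.log(n / len(labels)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Add-one (Laplace) smoothing so unseen words never zero out a class.
        likelihoods[c] = {
            w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
            for w in vocab
        }
    return priors, likelihoods, vocab

def classify(doc, priors, likelihoods, vocab):
    """Pick the class with the highest posterior log-probability."""
    scores = {}
    for c in priors:
        score = priors[c]
        for word in doc.split():
            if word in vocab:
                score += likelihoods[c][word]
        scores[c] = score
    return max(scores, key=scores.get)

docs = ["loan approved low risk", "credit limit increased",
        "payment overdue default risk", "missed payment default"]
labels = ["good", "good", "bad", "bad"]
priors, likelihoods, vocab = train_nb(docs, labels)
print(classify("default risk overdue", priors, likelihoods, vocab))  # -> bad
```

A production system of the kind described would instead rely on scikit-learn, Spark MLlib or a deep model; the sketch only shows the statistical idea behind the simplest of the listed methods.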