● Research and develop advanced statistical and machine learning models for
analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand characteristics of data, evaluate alternate
models and validate hypotheses through theoretical and empirical approaches.
● Productize proven or working models into production-quality code.
● Collaborate with product management, marketing, and engineering teams in
Business Units to elicit and understand their requirements and challenges, and to
develop potential solutions.
● Stay current with the latest research and technology ideas; share knowledge by
clearly articulating results and ideas to key decision-makers.
● File patents for innovative solutions that add to the company's IP portfolio
Requirements
● 4 to 6 years of strong experience in data mining, machine learning and
statistical analysis.
● BS/MS/Ph.D. in Computer Science, Statistics, Applied Math, or related areas
from premier institutes (only candidates from IITs / IISc / BITS / top NITs or top US
universities should apply)
● Experience in productizing models to code in a fast-paced start-up
environment.
● Fluency in analytical tools such as MATLAB, R, Weka, etc.
● Strong intuition for data and a keen aptitude for large-scale data analysis
● Strong communication and collaboration skills.
About Bidgely
Similar jobs
Hi, we are looking for a Data Science & AI professional for our KPHB, Hyderabad branch. Requirements:
• Minimum 6 months to 1 year of experience in Data Science & AI
• Proven experience, with a good GitHub profile and projects
• Good grasp of Data Science & AI concepts
• Proficiency in Python, ML, statistics, deep learning, NLP, OpenCV, etc.
• Good communication and presentation skills
About Quidich
Quidich Innovation Labs pioneers products and customized technology solutions for the Sports Broadcast & Film industry. With a mission to bring machines and machine learning to sports, we use camera technology to develop services using remote controlled systems like drones and buggies that add value to any broadcast or production. Quidich provides services to some of the biggest sports & broadcast clients in India and across the globe. A few recent projects include Indian Premier League, ICC World Cup for Men and Women, Kaun Banega Crorepati, Bigg Boss, Gully Boy & Sanju.
What’s Unique About Quidich?
- Your work will be consumed by millions of people within months of your joining and will impact consumption patterns of how live sport is viewed across the globe.
- You work with passionate, talented, and diverse people who inspire and support you to achieve your goals.
- You work in a culture of trust, care, and compassion.
- You have the autonomy to shape your role, and drive your own learning and growth.
Opportunity
- You will be a part of world-class sporting events
- Your contribution to the software will help shape the final output seen on television
- You will have an opportunity to work in live broadcast scenarios
- You will work in a close-knit team that is driven by innovation
Role
We are looking for a tech enthusiast who can work with us to help further the development of our Augmented Reality product, Spatio, to keep us ahead of the technology curve. We are one of the few companies in the world currently offering this product for live broadcast. We have a tight product roadmap that needs enthusiastic people to solve problems in the realm of software development and computer vision systems. Qualified candidates will be driven self-starters, robust thinkers, strong collaborators, and adept at operating in a highly dynamic environment. We look for candidates that are passionate about the product and embody our values.
Responsibilities
- Working with the research team to develop, evaluate, and optimize various state-of-the-art algorithms.
- Deploying high performance, readable, and reliable code on edge devices or any other target environments.
- Continuously exploring new frameworks and identifying ways to incorporate those in the product.
- Collaborating with the core team to bring ideas to life and keep pace with the latest research in Computer Vision, Deep Learning etc.
Minimum Qualifications, Skills and Competencies
- B.E./B.Tech. or Master's in Computer Science, Mathematics, or relevant experience
- 3+ years of experience with computer vision algorithms such as SfM/SLAM, optical flow, and visual-inertial odometry
- Experience in sensor fusion (camera, IMU, LiDAR) and in probabilistic filters such as the EKF and UKF (see the sketch after this list)
- Proficiency in programming (C++) and algorithms
- Strong mathematical understanding: linear algebra, 3D geometry, probability.
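To make the filtering requirement above concrete, here is a minimal sketch of a Kalman-style predict/update step in Python (an EKF follows the same structure but linearizes nonlinear motion and measurement models around the current estimate). The constant-velocity model and all matrix values below are toy assumptions for illustration, not Quidich's actual systems.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state estimate x and covariance P through motion model F."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z observed through model H."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P

# Toy 1D constant-velocity example: state = [position, velocity] (assumed values).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])             # we observe position only
Q = 1e-3 * np.eye(2)
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)

x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([0.12]), H, R)
```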
Preferred Qualifications, Skills and Competencies
- Proven experience in optical flow, multi-camera geometry, 3D reconstruction
- Strong background in Machine Learning and Deep Learning frameworks.
Reporting To: Product Lead
Joining Date: Immediate (Mumbai)
Hypersonix.ai - Data Scientist
As a Data Scientist with Hypersonix (https://hypersonix.ai/), you will play a key role in translating data into insights for our clients. You will design, develop, and implement processes and frameworks in our AI platform that help our clients make sense of the data they generate and consume those insights to make informed decisions.
Role – Analytics
Job Responsibilities
Solve business problems and develop business solutions: use problem-solving methodologies to propose creative solutions to business problems. Recommend, design, and develop state-of-the-art, data-driven analyses using statistical and advanced analytics methodologies. Develop models and recommend insights. Form hypotheses and run experiments to gain empirical insights and validate them. Identify and eliminate possible obstacles, and identify alternative creative solutions.
- Experience in design and review of new solution concepts and leading the delivery of high-impact analytics solutions and programs for global clients
- Identify opportunities and partner with key stakeholders to set priorities, manage expectations, facilitate change required to activate insights, and measure the impact
- Deconstruct problems and goals to form a clear picture for hypothesis generation and use best practices around decision science approaches and technology to solve business challenges;
- Integrate custom analytical solutions (e.g., predictive modeling, segmentation, issue tree frameworks) to support data-driven decision-making;
- Translate and communicate results, recommendations, and opportunities to improve data solutions to internal and external leadership with easily consumable reports and presentations.
- Should be able to apply domain knowledge to functional areas like market size estimation, business growth strategy, strategic revenue management, marketing effectiveness
- Have business acumen to manage revenues profitably and meet financial goals consistently. Able to quantify business value for clients and create win-win commercial propositions.
- Good thought leadership and the ability to structure and solve business problems, innovating where required
- Must have the ability to adapt to changing business priorities in a fast-paced business environment
Technical Expertise
- Should have the ability to handle structured /unstructured data and have prior experience in loading, validating, and cleaning various types of data
- Should have a very good understanding of data structures and algorithms
- Experience leading and working independently on end-to-end projects in a fast-paced environment is strongly preferred
- Advanced knowledge of SQL/Redshift and proficiency in Python/R
- Sound knowledge of advanced analytics and machine learning techniques such as segmentation/clustering, recommendation engines, propensity models, and forecasting to drive growth throughout the customer lifecycle (a minimal clustering sketch follows this list). Should be able to evaluate and bring in new advanced techniques to enhance the value-add for clients
- Research and develop statistical learning models for data analysis
- Collaborate with product management and engineering departments to understand company needs and devise possible solutions
- Keep up-to-date with latest technology trends
- Communicate results and ideas to key decision makers
- Implement new statistical or other mathematical methodologies as needed for specific models or analysis
- Optimize joint development efforts through appropriate database use and project design
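As a hedged illustration of the segmentation/clustering work mentioned in the Technical Expertise list, here is a minimal customer-segmentation sketch using scikit-learn's k-means. The features, cluster count, and data are synthetic assumptions, not Hypersonix's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic feature matrix, e.g. [recency, frequency, monetary] per customer (assumed).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))

# Scale features, then fit k-means with an assumed k=4 segments.
X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

segments = kmeans.labels_              # cluster id per customer
print(np.bincount(segments))           # segment sizes
```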
Qualifications/Requirements:
- Master's or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math, or equivalent fields, with a strong mathematical background
- Excellent understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, etc.
- 3+ years of experience building data science-driven solutions, including data collection, feature selection, model training, and post-deployment validation
- Strong hands-on coding skills (preferably in Python) for processing large-scale data sets and developing machine learning models
- Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow
- Good team player with excellent written, verbal, and presentation communication skills
Desired Experience:
- Experience with AWS, S3, Flink, Spark, Kafka, Elasticsearch
- Knowledge and experience with NLP technology
- Previous work in a start-up environment
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining data warehouses and data lakes for an organization. This role will closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional/non-functional business requirements and foster data-driven decision-making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Maintain operational excellence, guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/NoSQL, schema design, and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Strong skills in PySpark (Python and Spark): the ability to create, manage, and manipulate Spark DataFrames, plus expertise in Spark query tuning and performance optimization (see the sketch after this list).
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Knowledge of shell scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premises and cloud-based infrastructure.
- A good understanding of the machine learning landscape and concepts.
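To illustrate the PySpark DataFrame skills called out above, here is a minimal sketch that creates, transforms, and aggregates a DataFrame. It assumes a local Spark runtime, and the table and column names are made up for illustration.

```python
from pyspark.sql import SparkSession, functions as F

# Start (or reuse) a local Spark session.
spark = SparkSession.builder.appName("example").getOrCreate()

# Build a small DataFrame in memory; columns are illustrative only.
df = spark.createDataFrame(
    [("2024-01-01", "web", 120.0), ("2024-01-01", "app", 75.5), ("2024-01-02", "web", 60.0)],
    ["date", "channel", "revenue"],
)

# Aggregate revenue per day and channel, then sort by date.
daily = (
    df.groupBy("date", "channel")
      .agg(F.sum("revenue").alias("revenue"))
      .orderBy("date")
)
daily.show()
```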
Qualifications and Experience:
Engineering or postgraduate degree, preferably in Computer Science, from a premier institution, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
Good to have at least one of the certifications listed here:
AZ-900 - Azure Fundamentals
DP-200, DP-201, DP-203, AZ-204 - Data Engineering
AZ-400 - DevOps Certification
About Us
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience that delivers tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, among others. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and relevant and to help build the careers of tomorrow.
● Object-oriented languages (e.g., Python, PySpark, Java, C#, C++) and frameworks (e.g., J2EE or .NET)
● Bachelor's degree or equivalent experience
● Knowledge of database fundamentals and fluency in advanced SQL, including concepts
such as windowing functions (a brief example follows this list)
● Knowledge of popular scripting languages for data processing such as Python, as well as
familiarity with common frameworks such as Pandas
● Experience building streaming ETL pipelines with tools such as Apache Flink, Apache
Beam, Google Cloud Dataflow, DBT and equivalents
● Experience building batch ETL pipelines with tools such as Apache Airflow, Spark, DBT, or
custom scripts
● Experience working with messaging systems such as Apache Kafka (and hosted
equivalents such as Amazon MSK), Apache Pulsar
● Familiarity with BI applications such as Tableau, Looker, or Superset
● Hands-on coding experience in Java or Scala
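As a brief illustration of the SQL window-function and Python scripting items above, the following sketch runs a windowed query from Python via the standard sqlite3 module (window functions require SQLite 3.25 or newer); the table and column names are illustrative assumptions only.

```python
import sqlite3

# In-memory database with a small, made-up orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('a', '2024-01-01', 10.0),
        ('a', '2024-01-05', 25.0),
        ('b', '2024-01-02', 40.0);
""")

# Window functions: per-customer order rank and running revenue total.
rows = conn.execute("""
    SELECT customer,
           order_date,
           amount,
           ROW_NUMBER() OVER (PARTITION BY customer ORDER BY order_date) AS order_rank,
           SUM(amount)  OVER (PARTITION BY customer ORDER BY order_date) AS running_total
    FROM orders
""").fetchall()

for row in rows:
    print(row)
```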
Deep Learning Computer Vision Data Scientist
Senior Data Scientist
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!
What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not least, competitive salary packages and fast-paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities.
We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision with a Master's degree (PhD preferred).
Key problem areas
- Preprocessing and feature extraction on noisy and unstructured data, both text and images.
- Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis (see the sketch after this list).
- Image based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.
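As a minimal sketch of the document clustering problem area listed above, the snippet below clusters a handful of short product titles with TF-IDF features and k-means via scikit-learn; the documents and cluster count are made-up assumptions, not DataWeave's production approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy product titles standing in for noisy web text (assumed examples).
docs = [
    "blue denim slim fit jeans",
    "running shoes lightweight mesh",
    "straight fit stretch jeans",
    "trail running shoe waterproof",
]

# Vectorize with unigrams and bigrams, then cluster into an assumed k=2 groups.
tfidf = TfidfVectorizer(ngram_range=(1, 2))
X = tfidf.fit_transform(docs)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. jeans vs. shoes clusters
```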
Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills on multiple programming languages with experience building production grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including NumPy, Matplotlib, scikit-learn, Keras, PyTorch, TensorFlow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter: someone who thrives in fast-paced environments with minimal 'management'.
- It's a huge bonus if you have personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.
Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end to end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.